00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 144 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3645 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.190 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.191 The recommended git tool is: git 00:00:00.191 using credential 00000000-0000-0000-0000-000000000002 00:00:00.193 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.223 Fetching changes from the remote Git repository 00:00:00.224 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.257 Using shallow fetch with depth 1 00:00:00.257 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.257 > git --version # timeout=10 00:00:00.280 > git --version # 'git version 2.39.2' 00:00:00.280 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.302 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.302 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.227 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.240 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.252 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.252 > git config core.sparsecheckout # timeout=10 00:00:07.263 > git read-tree -mu HEAD # timeout=10 00:00:07.279 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.301 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.301 > git rev-list --no-walk 6d4840695fb479ead742a39eb3a563a20cd15407 # timeout=10 00:00:07.412 [Pipeline] Start of Pipeline 00:00:07.426 [Pipeline] library 00:00:07.428 Loading library shm_lib@master 00:00:07.428 Library shm_lib@master is cached. Copying from home. 00:00:07.444 [Pipeline] node 00:00:07.457 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.459 [Pipeline] { 00:00:07.471 [Pipeline] catchError 00:00:07.472 [Pipeline] { 00:00:07.483 [Pipeline] wrap 00:00:07.490 [Pipeline] { 00:00:07.497 [Pipeline] stage 00:00:07.499 [Pipeline] { (Prologue) 00:00:07.516 [Pipeline] echo 00:00:07.517 Node: VM-host-SM9 00:00:07.523 [Pipeline] cleanWs 00:00:07.532 [WS-CLEANUP] Deleting project workspace... 00:00:07.532 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.539 [WS-CLEANUP] done 00:00:07.755 [Pipeline] setCustomBuildProperty 00:00:07.828 [Pipeline] httpRequest 00:00:08.481 [Pipeline] echo 00:00:08.482 Sorcerer 10.211.164.20 is alive 00:00:08.491 [Pipeline] retry 00:00:08.493 [Pipeline] { 00:00:08.507 [Pipeline] httpRequest 00:00:08.511 HttpMethod: GET 00:00:08.511 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.512 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.523 Response Code: HTTP/1.1 200 OK 00:00:08.524 Success: Status code 200 is in the accepted range: 200,404 00:00:08.524 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.787 [Pipeline] } 00:00:11.804 [Pipeline] // retry 00:00:11.814 [Pipeline] sh 00:00:12.096 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.112 [Pipeline] httpRequest 00:00:12.472 [Pipeline] echo 00:00:12.474 Sorcerer 10.211.164.20 is alive 00:00:12.487 [Pipeline] retry 00:00:12.490 [Pipeline] { 00:00:12.512 [Pipeline] httpRequest 00:00:12.517 HttpMethod: GET 00:00:12.518 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:12.518 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:12.541 Response Code: HTTP/1.1 200 OK 00:00:12.542 Success: Status code 200 is in the accepted range: 200,404 00:00:12.543 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:03.877 [Pipeline] } 00:01:03.896 [Pipeline] // retry 00:01:03.904 [Pipeline] sh 00:01:04.184 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:06.730 [Pipeline] sh 00:01:07.010 + git -C spdk log --oneline -n5 00:01:07.010 b18e1bd62 version: v24.09.1-pre 00:01:07.010 19524ad45 version: v24.09 00:01:07.010 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:07.010 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:07.010 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:07.030 [Pipeline] withCredentials 00:01:07.040 > git --version # timeout=10 00:01:07.053 > git --version # 'git version 2.39.2' 00:01:07.066 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:07.068 [Pipeline] { 00:01:07.078 [Pipeline] retry 00:01:07.081 [Pipeline] { 00:01:07.098 [Pipeline] sh 00:01:07.378 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:07.389 [Pipeline] } 00:01:07.407 [Pipeline] // retry 00:01:07.413 [Pipeline] } 00:01:07.429 [Pipeline] // withCredentials 00:01:07.440 [Pipeline] httpRequest 00:01:07.817 [Pipeline] echo 00:01:07.819 Sorcerer 10.211.164.20 is alive 00:01:07.830 [Pipeline] retry 00:01:07.832 [Pipeline] { 00:01:07.848 [Pipeline] httpRequest 00:01:07.852 HttpMethod: GET 00:01:07.853 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:07.854 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:07.860 Response Code: HTTP/1.1 200 OK 00:01:07.860 Success: Status code 200 is in the accepted range: 200,404 00:01:07.861 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:29.253 [Pipeline] } 00:01:29.270 [Pipeline] // retry 00:01:29.278 [Pipeline] sh 00:01:29.559 + tar 
--no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:30.950 [Pipeline] sh 00:01:31.230 + git -C dpdk log --oneline -n5 00:01:31.231 eeb0605f11 version: 23.11.0 00:01:31.231 238778122a doc: update release notes for 23.11 00:01:31.231 46aa6b3cfc doc: fix description of RSS features 00:01:31.231 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:31.231 7e421ae345 devtools: support skipping forbid rule check 00:01:31.249 [Pipeline] writeFile 00:01:31.264 [Pipeline] sh 00:01:31.546 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:31.558 [Pipeline] sh 00:01:31.840 + cat autorun-spdk.conf 00:01:31.840 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.840 SPDK_TEST_NVMF=1 00:01:31.840 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.840 SPDK_TEST_URING=1 00:01:31.840 SPDK_TEST_VFIOUSER=1 00:01:31.840 SPDK_TEST_USDT=1 00:01:31.840 SPDK_RUN_UBSAN=1 00:01:31.840 NET_TYPE=virt 00:01:31.840 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:31.840 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:31.840 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.847 RUN_NIGHTLY=1 00:01:31.849 [Pipeline] } 00:01:31.863 [Pipeline] // stage 00:01:31.879 [Pipeline] stage 00:01:31.881 [Pipeline] { (Run VM) 00:01:31.894 [Pipeline] sh 00:01:32.175 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:32.175 + echo 'Start stage prepare_nvme.sh' 00:01:32.175 Start stage prepare_nvme.sh 00:01:32.175 + [[ -n 4 ]] 00:01:32.175 + disk_prefix=ex4 00:01:32.175 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:32.175 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:32.175 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:32.175 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.175 ++ SPDK_TEST_NVMF=1 00:01:32.175 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:32.175 ++ SPDK_TEST_URING=1 00:01:32.175 ++ SPDK_TEST_VFIOUSER=1 00:01:32.175 ++ SPDK_TEST_USDT=1 00:01:32.175 ++ SPDK_RUN_UBSAN=1 00:01:32.176 ++ NET_TYPE=virt 00:01:32.176 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:32.176 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:32.176 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.176 ++ RUN_NIGHTLY=1 00:01:32.176 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:32.176 + nvme_files=() 00:01:32.176 + declare -A nvme_files 00:01:32.176 + backend_dir=/var/lib/libvirt/images/backends 00:01:32.176 + nvme_files['nvme.img']=5G 00:01:32.176 + nvme_files['nvme-cmb.img']=5G 00:01:32.176 + nvme_files['nvme-multi0.img']=4G 00:01:32.176 + nvme_files['nvme-multi1.img']=4G 00:01:32.176 + nvme_files['nvme-multi2.img']=4G 00:01:32.176 + nvme_files['nvme-openstack.img']=8G 00:01:32.176 + nvme_files['nvme-zns.img']=5G 00:01:32.176 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:32.176 + (( SPDK_TEST_FTL == 1 )) 00:01:32.176 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:32.176 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:32.176 + for nvme in "${!nvme_files[@]}" 00:01:32.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:32.176 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.176 + for nvme in "${!nvme_files[@]}" 00:01:32.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:32.176 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.176 + for nvme in "${!nvme_files[@]}" 00:01:32.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:32.176 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:32.176 + for nvme in "${!nvme_files[@]}" 00:01:32.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:32.176 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.176 + for nvme in "${!nvme_files[@]}" 00:01:32.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:32.176 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.176 + for nvme in "${!nvme_files[@]}" 00:01:32.176 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:32.435 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.435 + for nvme in "${!nvme_files[@]}" 00:01:32.435 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:32.435 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.435 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:32.435 + echo 'End stage prepare_nvme.sh' 00:01:32.435 End stage prepare_nvme.sh 00:01:32.447 [Pipeline] sh 00:01:32.729 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:32.729 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:01:32.989 00:01:32.989 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:32.989 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:32.989 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:32.989 HELP=0 00:01:32.989 DRY_RUN=0 00:01:32.989 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:01:32.989 NVME_DISKS_TYPE=nvme,nvme, 00:01:32.989 NVME_AUTO_CREATE=0 00:01:32.989 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:01:32.989 NVME_CMB=,, 00:01:32.989 NVME_PMR=,, 00:01:32.989 NVME_ZNS=,, 00:01:32.989 NVME_MS=,, 00:01:32.989 NVME_FDP=,, 
00:01:32.989 SPDK_VAGRANT_DISTRO=fedora39 00:01:32.989 SPDK_VAGRANT_VMCPU=10 00:01:32.989 SPDK_VAGRANT_VMRAM=12288 00:01:32.989 SPDK_VAGRANT_PROVIDER=libvirt 00:01:32.989 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:32.989 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:32.989 SPDK_OPENSTACK_NETWORK=0 00:01:32.989 VAGRANT_PACKAGE_BOX=0 00:01:32.989 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:32.989 FORCE_DISTRO=true 00:01:32.989 VAGRANT_BOX_VERSION= 00:01:32.989 EXTRA_VAGRANTFILES= 00:01:32.989 NIC_MODEL=e1000 00:01:32.989 00:01:32.989 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:32.989 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:36.298 Bringing machine 'default' up with 'libvirt' provider... 00:01:36.570 ==> default: Creating image (snapshot of base box volume). 00:01:36.829 ==> default: Creating domain with the following settings... 00:01:36.829 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732018901_287309d57f57fc5f54cd 00:01:36.829 ==> default: -- Domain type: kvm 00:01:36.829 ==> default: -- Cpus: 10 00:01:36.829 ==> default: -- Feature: acpi 00:01:36.829 ==> default: -- Feature: apic 00:01:36.829 ==> default: -- Feature: pae 00:01:36.829 ==> default: -- Memory: 12288M 00:01:36.829 ==> default: -- Memory Backing: hugepages: 00:01:36.829 ==> default: -- Management MAC: 00:01:36.829 ==> default: -- Loader: 00:01:36.829 ==> default: -- Nvram: 00:01:36.829 ==> default: -- Base box: spdk/fedora39 00:01:36.829 ==> default: -- Storage pool: default 00:01:36.829 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732018901_287309d57f57fc5f54cd.img (20G) 00:01:36.829 ==> default: -- Volume Cache: default 00:01:36.829 ==> default: -- Kernel: 00:01:36.829 ==> default: -- Initrd: 00:01:36.829 ==> default: -- Graphics Type: vnc 00:01:36.829 ==> default: -- Graphics Port: -1 00:01:36.829 ==> default: -- Graphics IP: 127.0.0.1 00:01:36.829 ==> default: -- Graphics Password: Not defined 00:01:36.829 ==> default: -- Video Type: cirrus 00:01:36.829 ==> default: -- Video VRAM: 9216 00:01:36.829 ==> default: -- Sound Type: 00:01:36.829 ==> default: -- Keymap: en-us 00:01:36.829 ==> default: -- TPM Path: 00:01:36.829 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:36.829 ==> default: -- Command line args: 00:01:36.829 ==> default: -> value=-device, 00:01:36.829 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:36.829 ==> default: -> value=-drive, 00:01:36.829 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:36.829 ==> default: -> value=-device, 00:01:36.829 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.829 ==> default: -> value=-device, 00:01:36.829 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:36.829 ==> default: -> value=-drive, 00:01:36.829 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:36.829 ==> default: -> value=-device, 00:01:36.829 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.829 ==> default: -> value=-drive, 00:01:36.829 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:36.829 ==> default: -> value=-device, 00:01:36.829 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.829 ==> default: -> value=-drive, 00:01:36.829 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:36.829 ==> default: -> value=-device, 00:01:36.829 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.829 ==> default: Creating shared folders metadata... 00:01:36.830 ==> default: Starting domain. 00:01:38.211 ==> default: Waiting for domain to get an IP address... 00:01:53.093 ==> default: Waiting for SSH to become available... 00:01:54.482 ==> default: Configuring and enabling network interfaces... 00:01:58.674 default: SSH address: 192.168.121.28:22 00:01:58.674 default: SSH username: vagrant 00:01:58.674 default: SSH auth method: private key 00:02:00.579 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:08.727 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:13.994 ==> default: Mounting SSHFS shared folder... 00:02:15.372 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:15.372 ==> default: Checking Mount.. 00:02:16.306 ==> default: Folder Successfully Mounted! 00:02:16.306 ==> default: Running provisioner: file... 00:02:17.243 default: ~/.gitconfig => .gitconfig 00:02:17.811 00:02:17.811 SUCCESS! 00:02:17.811 00:02:17.811 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:17.811 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:17.811 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:17.811 00:02:17.820 [Pipeline] } 00:02:17.835 [Pipeline] // stage 00:02:17.844 [Pipeline] dir 00:02:17.845 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:17.846 [Pipeline] { 00:02:17.859 [Pipeline] catchError 00:02:17.861 [Pipeline] { 00:02:17.873 [Pipeline] sh 00:02:18.154 + vagrant ssh-config --host vagrant 00:02:18.154 + sed -ne /^Host/,$p 00:02:18.154 + tee ssh_conf 00:02:21.443 Host vagrant 00:02:21.443 HostName 192.168.121.28 00:02:21.443 User vagrant 00:02:21.443 Port 22 00:02:21.443 UserKnownHostsFile /dev/null 00:02:21.443 StrictHostKeyChecking no 00:02:21.443 PasswordAuthentication no 00:02:21.443 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:21.443 IdentitiesOnly yes 00:02:21.443 LogLevel FATAL 00:02:21.443 ForwardAgent yes 00:02:21.443 ForwardX11 yes 00:02:21.443 00:02:21.457 [Pipeline] withEnv 00:02:21.459 [Pipeline] { 00:02:21.472 [Pipeline] sh 00:02:21.752 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:21.752 source /etc/os-release 00:02:21.752 [[ -e /image.version ]] && img=$(< /image.version) 00:02:21.752 # Minimal, systemd-like check. 
00:02:21.752 if [[ -e /.dockerenv ]]; then 00:02:21.752 # Clear garbage from the node's name: 00:02:21.752 # agt-er_autotest_547-896 -> autotest_547-896 00:02:21.752 # $HOSTNAME is the actual container id 00:02:21.752 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:21.752 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:21.752 # We can assume this is a mount from a host where container is running, 00:02:21.752 # so fetch its hostname to easily identify the target swarm worker. 00:02:21.752 container="$(< /etc/hostname) ($agent)" 00:02:21.752 else 00:02:21.752 # Fallback 00:02:21.752 container=$agent 00:02:21.752 fi 00:02:21.752 fi 00:02:21.752 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:21.752 00:02:22.024 [Pipeline] } 00:02:22.040 [Pipeline] // withEnv 00:02:22.048 [Pipeline] setCustomBuildProperty 00:02:22.062 [Pipeline] stage 00:02:22.064 [Pipeline] { (Tests) 00:02:22.081 [Pipeline] sh 00:02:22.373 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:22.390 [Pipeline] sh 00:02:22.697 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:22.718 [Pipeline] timeout 00:02:22.719 Timeout set to expire in 1 hr 0 min 00:02:22.720 [Pipeline] { 00:02:22.735 [Pipeline] sh 00:02:23.019 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:23.585 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:23.597 [Pipeline] sh 00:02:23.877 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:24.184 [Pipeline] sh 00:02:24.463 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:24.738 [Pipeline] sh 00:02:25.017 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:25.276 ++ readlink -f spdk_repo 00:02:25.276 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:25.276 + [[ -n /home/vagrant/spdk_repo ]] 00:02:25.276 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:25.276 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:25.276 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:25.276 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:25.276 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:25.276 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:25.276 + cd /home/vagrant/spdk_repo 00:02:25.276 + source /etc/os-release 00:02:25.276 ++ NAME='Fedora Linux' 00:02:25.276 ++ VERSION='39 (Cloud Edition)' 00:02:25.276 ++ ID=fedora 00:02:25.276 ++ VERSION_ID=39 00:02:25.276 ++ VERSION_CODENAME= 00:02:25.276 ++ PLATFORM_ID=platform:f39 00:02:25.276 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:25.276 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:25.276 ++ LOGO=fedora-logo-icon 00:02:25.276 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:25.276 ++ HOME_URL=https://fedoraproject.org/ 00:02:25.276 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:25.276 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:25.276 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:25.276 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:25.276 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:25.276 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:25.276 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:25.276 ++ SUPPORT_END=2024-11-12 00:02:25.276 ++ VARIANT='Cloud Edition' 00:02:25.276 ++ VARIANT_ID=cloud 00:02:25.276 + uname -a 00:02:25.276 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:25.276 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:25.535 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:25.535 Hugepages 00:02:25.535 node hugesize free / total 00:02:25.794 node0 1048576kB 0 / 0 00:02:25.794 node0 2048kB 0 / 0 00:02:25.794 00:02:25.794 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:25.794 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:25.794 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:25.794 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:25.794 + rm -f /tmp/spdk-ld-path 00:02:25.794 + source autorun-spdk.conf 00:02:25.794 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:25.794 ++ SPDK_TEST_NVMF=1 00:02:25.794 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:25.794 ++ SPDK_TEST_URING=1 00:02:25.794 ++ SPDK_TEST_VFIOUSER=1 00:02:25.794 ++ SPDK_TEST_USDT=1 00:02:25.794 ++ SPDK_RUN_UBSAN=1 00:02:25.794 ++ NET_TYPE=virt 00:02:25.794 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:25.794 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:25.794 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:25.794 ++ RUN_NIGHTLY=1 00:02:25.794 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:25.794 + [[ -n '' ]] 00:02:25.794 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:25.794 + for M in /var/spdk/build-*-manifest.txt 00:02:25.794 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:25.794 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:25.794 + for M in /var/spdk/build-*-manifest.txt 00:02:25.794 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:25.794 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:25.794 + for M in /var/spdk/build-*-manifest.txt 00:02:25.794 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:25.794 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:25.794 ++ uname 00:02:25.794 + [[ Linux == \L\i\n\u\x ]] 00:02:25.794 + sudo dmesg -T 00:02:25.794 + sudo dmesg --clear 00:02:25.794 + dmesg_pid=6003 
00:02:25.794 + sudo dmesg -Tw 00:02:25.795 + [[ Fedora Linux == FreeBSD ]] 00:02:25.795 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:25.795 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:25.795 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:25.795 + [[ -x /usr/src/fio-static/fio ]] 00:02:25.795 + export FIO_BIN=/usr/src/fio-static/fio 00:02:25.795 + FIO_BIN=/usr/src/fio-static/fio 00:02:25.795 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:25.795 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:25.795 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:25.795 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:25.795 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:25.795 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:25.795 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:25.795 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:25.795 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:25.795 Test configuration: 00:02:25.795 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:25.795 SPDK_TEST_NVMF=1 00:02:25.795 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:25.795 SPDK_TEST_URING=1 00:02:25.795 SPDK_TEST_VFIOUSER=1 00:02:25.795 SPDK_TEST_USDT=1 00:02:25.795 SPDK_RUN_UBSAN=1 00:02:25.795 NET_TYPE=virt 00:02:25.795 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:25.795 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:25.795 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:26.054 RUN_NIGHTLY=1 12:22:31 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:26.054 12:22:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:26.054 12:22:31 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:26.054 12:22:31 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:26.054 12:22:31 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:26.054 12:22:31 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:26.054 12:22:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.054 12:22:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.054 12:22:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.054 12:22:31 -- paths/export.sh@5 -- $ export PATH 00:02:26.054 12:22:31 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:26.054 12:22:31 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:26.054 12:22:31 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:26.054 12:22:31 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732018951.XXXXXX 00:02:26.054 12:22:31 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732018951.HHFLY2 00:02:26.054 12:22:31 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:26.054 12:22:31 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:26.054 12:22:31 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:26.054 12:22:31 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:26.054 12:22:31 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:26.054 12:22:31 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:26.054 12:22:31 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:26.054 12:22:31 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:26.054 12:22:31 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.054 12:22:31 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:26.054 12:22:31 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:26.054 12:22:31 -- pm/common@17 -- $ local monitor 00:02:26.054 12:22:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.054 12:22:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:26.054 12:22:31 -- pm/common@25 -- $ sleep 1 00:02:26.054 12:22:31 -- pm/common@21 -- $ date +%s 00:02:26.054 12:22:31 -- pm/common@21 -- $ date +%s 00:02:26.054 12:22:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732018951 00:02:26.054 12:22:31 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732018951 00:02:26.054 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732018951_collect-vmstat.pm.log 00:02:26.054 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732018951_collect-cpu-load.pm.log 00:02:26.992 12:22:32 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:26.992 12:22:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:26.992 12:22:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:26.992 12:22:32 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:26.992 12:22:32 -- spdk/autobuild.sh@16 -- $ date -u 
00:02:26.992 Tue Nov 19 12:22:32 PM UTC 2024 00:02:26.992 12:22:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:26.992 v24.09-1-gb18e1bd62 00:02:26.992 12:22:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:26.992 12:22:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:26.992 12:22:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:26.992 12:22:32 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:26.992 12:22:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:26.992 12:22:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.992 ************************************ 00:02:26.992 START TEST ubsan 00:02:26.992 ************************************ 00:02:26.992 using ubsan 00:02:26.992 12:22:32 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:26.992 00:02:26.992 real 0m0.000s 00:02:26.992 user 0m0.000s 00:02:26.992 sys 0m0.000s 00:02:26.992 12:22:32 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:26.992 12:22:32 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:26.992 ************************************ 00:02:26.992 END TEST ubsan 00:02:26.992 ************************************ 00:02:26.992 12:22:32 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:26.992 12:22:32 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:26.992 12:22:32 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:26.992 12:22:32 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:26.992 12:22:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:26.992 12:22:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:26.992 ************************************ 00:02:26.992 START TEST build_native_dpdk 00:02:26.992 ************************************ 00:02:26.992 12:22:32 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:26.992 12:22:32 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:26.992 12:22:32 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:26.992 12:22:32 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:26.992 12:22:32 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:26.992 12:22:32 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:26.992 12:22:32 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:26.992 12:22:32 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:26.992 12:22:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:26.992 12:22:32 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:26.992 12:22:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:26.992 12:22:32 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:27.252 12:22:32 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:27.252 eeb0605f11 version: 23.11.0 00:02:27.252 238778122a doc: update release notes for 23.11 00:02:27.252 46aa6b3cfc doc: fix description of RSS features 00:02:27.252 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:27.252 7e421ae345 devtools: support skipping forbid rule check 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:27.252 12:22:32 
build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:27.252 patching file config/rte_config.h 00:02:27.252 Hunk #1 succeeded at 60 (offset 1 line). 00:02:27.252 12:22:32 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:27.252 12:22:32 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:27.253 12:22:32 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:27.253 patching file lib/pcapng/rte_pcapng.c 00:02:27.253 12:22:32 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:27.253 12:22:32 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:27.253 12:22:32 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:27.253 12:22:32 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:27.253 12:22:32 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:27.253 12:22:32 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:27.253 12:22:32 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:32.549 The Meson build system 00:02:32.549 Version: 1.5.0 00:02:32.549 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:32.549 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:32.549 Build type: native build 00:02:32.549 Program cat found: YES (/usr/bin/cat) 00:02:32.549 Project name: DPDK 00:02:32.549 Project version: 23.11.0 00:02:32.549 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:32.549 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:32.549 Host machine cpu family: x86_64 00:02:32.549 Host machine cpu: x86_64 00:02:32.549 Message: ## Building in Developer Mode ## 00:02:32.549 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:32.549 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:32.549 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:32.549 Program python3 found: YES (/usr/bin/python3) 00:02:32.549 Program cat found: YES (/usr/bin/cat) 00:02:32.549 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:32.549 Compiler for C supports arguments -march=native: YES 00:02:32.549 Checking for size of "void *" : 8 00:02:32.549 Checking for size of "void *" : 8 (cached) 00:02:32.549 Library m found: YES 00:02:32.549 Library numa found: YES 00:02:32.549 Has header "numaif.h" : YES 00:02:32.549 Library fdt found: NO 00:02:32.549 Library execinfo found: NO 00:02:32.549 Has header "execinfo.h" : YES 00:02:32.549 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:32.549 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:32.549 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:32.549 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:32.549 Run-time dependency openssl found: YES 3.1.1 00:02:32.549 Run-time dependency libpcap found: YES 1.10.4 00:02:32.549 Has header "pcap.h" with dependency libpcap: YES 00:02:32.549 Compiler for C supports arguments -Wcast-qual: YES 00:02:32.549 Compiler for C supports arguments -Wdeprecated: YES 00:02:32.549 Compiler for C supports arguments -Wformat: YES 00:02:32.549 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:32.549 Compiler for C supports arguments -Wformat-security: NO 00:02:32.549 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:32.549 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:32.549 Compiler for C supports arguments -Wnested-externs: YES 00:02:32.549 Compiler for C supports arguments -Wold-style-definition: YES 00:02:32.549 Compiler for C supports arguments -Wpointer-arith: YES 00:02:32.549 Compiler for C supports arguments -Wsign-compare: YES 00:02:32.549 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:32.549 Compiler for C supports arguments -Wundef: YES 00:02:32.549 Compiler for C supports arguments -Wwrite-strings: YES 00:02:32.549 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:32.549 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:32.549 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:32.549 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:32.549 Program objdump found: YES (/usr/bin/objdump) 00:02:32.549 Compiler for C supports arguments -mavx512f: YES 00:02:32.549 Checking if "AVX512 checking" compiles: YES 00:02:32.549 Fetching value of define "__SSE4_2__" : 1 00:02:32.549 Fetching value of define "__AES__" : 1 00:02:32.549 Fetching value of define "__AVX__" : 1 00:02:32.549 Fetching value of define "__AVX2__" : 1 00:02:32.549 Fetching value of define "__AVX512BW__" : (undefined) 00:02:32.549 Fetching value of define "__AVX512CD__" : (undefined) 00:02:32.549 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:32.549 Fetching value of define "__AVX512F__" : (undefined) 00:02:32.549 Fetching value of define "__AVX512VL__" : (undefined) 00:02:32.549 Fetching value of define "__PCLMUL__" : 1 00:02:32.549 Fetching value of define "__RDRND__" : 1 00:02:32.549 Fetching value of define "__RDSEED__" : 1 00:02:32.549 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:32.549 Fetching value of define "__znver1__" : (undefined) 00:02:32.549 Fetching value of define "__znver2__" : (undefined) 00:02:32.549 Fetching value of define "__znver3__" : (undefined) 00:02:32.549 Fetching value of define "__znver4__" : (undefined) 00:02:32.549 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:32.549 Message: lib/log: Defining dependency "log" 00:02:32.549 Message: lib/kvargs: Defining dependency "kvargs" 00:02:32.549 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:32.549 Checking for function "getentropy" : NO 00:02:32.549 Message: lib/eal: Defining dependency "eal" 00:02:32.549 Message: lib/ring: Defining dependency "ring" 00:02:32.549 Message: lib/rcu: Defining dependency "rcu" 00:02:32.549 Message: lib/mempool: Defining dependency "mempool" 00:02:32.549 Message: lib/mbuf: Defining dependency "mbuf" 00:02:32.549 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:32.549 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:32.549 Compiler for C supports arguments -mpclmul: YES 00:02:32.549 Compiler for C supports arguments -maes: YES 00:02:32.549 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:32.549 Compiler for C supports arguments -mavx512bw: YES 00:02:32.549 Compiler for C supports arguments -mavx512dq: YES 00:02:32.549 Compiler for C supports arguments -mavx512vl: YES 00:02:32.549 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:32.549 Compiler for C supports arguments -mavx2: YES 00:02:32.549 Compiler for C supports arguments -mavx: YES 00:02:32.549 Message: lib/net: Defining dependency "net" 00:02:32.549 Message: lib/meter: Defining dependency "meter" 00:02:32.549 Message: lib/ethdev: Defining dependency "ethdev" 00:02:32.549 Message: lib/pci: Defining dependency "pci" 00:02:32.549 Message: lib/cmdline: Defining dependency "cmdline" 00:02:32.549 Message: lib/metrics: Defining dependency "metrics" 00:02:32.549 Message: lib/hash: Defining dependency "hash" 00:02:32.549 Message: lib/timer: Defining dependency "timer" 00:02:32.549 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:32.549 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:32.549 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:32.549 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:32.549 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:32.549 Message: lib/acl: Defining dependency "acl" 00:02:32.549 Message: lib/bbdev: Defining dependency "bbdev" 00:02:32.549 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:32.549 Run-time dependency libelf found: YES 0.191 00:02:32.549 Message: lib/bpf: Defining dependency "bpf" 00:02:32.549 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:32.549 Message: lib/compressdev: Defining dependency "compressdev" 00:02:32.549 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:32.549 Message: lib/distributor: Defining dependency "distributor" 00:02:32.549 Message: lib/dmadev: Defining dependency "dmadev" 00:02:32.549 Message: lib/efd: Defining dependency "efd" 00:02:32.549 Message: lib/eventdev: Defining dependency "eventdev" 00:02:32.549 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:32.549 Message: lib/gpudev: Defining dependency "gpudev" 00:02:32.549 Message: lib/gro: Defining dependency "gro" 00:02:32.549 Message: lib/gso: Defining dependency "gso" 00:02:32.549 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:32.549 Message: lib/jobstats: Defining dependency "jobstats" 00:02:32.549 Message: lib/latencystats: Defining dependency "latencystats" 00:02:32.549 Message: lib/lpm: Defining dependency "lpm" 00:02:32.549 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:32.549 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:32.549 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:32.549 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:32.549 Message: lib/member: Defining dependency "member" 00:02:32.549 Message: lib/pcapng: Defining dependency "pcapng" 00:02:32.549 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:32.549 Message: lib/power: Defining dependency "power" 00:02:32.549 Message: lib/rawdev: Defining dependency "rawdev" 00:02:32.549 Message: lib/regexdev: Defining dependency "regexdev" 00:02:32.549 Message: lib/mldev: Defining dependency "mldev" 00:02:32.549 Message: lib/rib: Defining dependency "rib" 00:02:32.549 Message: lib/reorder: Defining dependency "reorder" 00:02:32.549 Message: lib/sched: Defining dependency "sched" 00:02:32.549 Message: lib/security: Defining dependency "security" 00:02:32.549 Message: lib/stack: Defining dependency "stack" 00:02:32.549 Has header "linux/userfaultfd.h" : YES 00:02:32.549 Has header "linux/vduse.h" : YES 00:02:32.549 Message: lib/vhost: Defining dependency "vhost" 00:02:32.549 Message: lib/ipsec: Defining dependency "ipsec" 00:02:32.549 Message: lib/pdcp: Defining dependency "pdcp" 00:02:32.549 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:32.549 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:32.549 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:32.549 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:32.550 Message: lib/fib: Defining dependency "fib" 00:02:32.550 Message: lib/port: Defining dependency "port" 00:02:32.550 Message: lib/pdump: Defining dependency "pdump" 00:02:32.550 Message: lib/table: Defining dependency "table" 00:02:32.550 Message: lib/pipeline: Defining dependency "pipeline" 00:02:32.550 Message: lib/graph: Defining dependency "graph" 00:02:32.550 Message: lib/node: Defining dependency "node" 00:02:32.550 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:34.456 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:34.456 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:34.456 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:34.456 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:34.456 Compiler for C supports arguments -Wno-unused-value: YES 00:02:34.456 Compiler for C supports arguments -Wno-format: YES 00:02:34.456 Compiler for C supports arguments -Wno-format-security: YES 00:02:34.456 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:34.456 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:34.456 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:34.456 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:34.456 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:34.456 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.456 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:34.456 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:34.456 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:34.456 Has header "sys/epoll.h" : YES 00:02:34.456 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:34.456 Configuring doxy-api-html.conf using configuration 00:02:34.456 Configuring doxy-api-man.conf using configuration 00:02:34.456 Program mandb found: YES (/usr/bin/mandb) 00:02:34.456 Program sphinx-build found: NO 00:02:34.456 Configuring rte_build_config.h using configuration 00:02:34.456 Message: 00:02:34.456 ================= 00:02:34.456 Applications Enabled 00:02:34.456 ================= 
00:02:34.456 00:02:34.456 apps: 00:02:34.456 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:34.456 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:34.456 test-pmd, test-regex, test-sad, test-security-perf, 00:02:34.456 00:02:34.456 Message: 00:02:34.456 ================= 00:02:34.456 Libraries Enabled 00:02:34.456 ================= 00:02:34.456 00:02:34.456 libs: 00:02:34.457 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:34.457 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:34.457 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:34.457 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:34.457 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:34.457 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:34.457 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:34.457 00:02:34.457 00:02:34.457 Message: 00:02:34.457 =============== 00:02:34.457 Drivers Enabled 00:02:34.457 =============== 00:02:34.457 00:02:34.457 common: 00:02:34.457 00:02:34.457 bus: 00:02:34.457 pci, vdev, 00:02:34.457 mempool: 00:02:34.457 ring, 00:02:34.457 dma: 00:02:34.457 00:02:34.457 net: 00:02:34.457 i40e, 00:02:34.457 raw: 00:02:34.457 00:02:34.457 crypto: 00:02:34.457 00:02:34.457 compress: 00:02:34.457 00:02:34.457 regex: 00:02:34.457 00:02:34.457 ml: 00:02:34.457 00:02:34.457 vdpa: 00:02:34.457 00:02:34.457 event: 00:02:34.457 00:02:34.457 baseband: 00:02:34.457 00:02:34.457 gpu: 00:02:34.457 00:02:34.457 00:02:34.457 Message: 00:02:34.457 ================= 00:02:34.457 Content Skipped 00:02:34.457 ================= 00:02:34.457 00:02:34.457 apps: 00:02:34.457 00:02:34.457 libs: 00:02:34.457 00:02:34.457 drivers: 00:02:34.457 common/cpt: not in enabled drivers build config 00:02:34.457 common/dpaax: not in enabled drivers build config 00:02:34.457 common/iavf: not in enabled drivers build config 00:02:34.457 common/idpf: not in enabled drivers build config 00:02:34.457 common/mvep: not in enabled drivers build config 00:02:34.457 common/octeontx: not in enabled drivers build config 00:02:34.457 bus/auxiliary: not in enabled drivers build config 00:02:34.457 bus/cdx: not in enabled drivers build config 00:02:34.457 bus/dpaa: not in enabled drivers build config 00:02:34.457 bus/fslmc: not in enabled drivers build config 00:02:34.457 bus/ifpga: not in enabled drivers build config 00:02:34.457 bus/platform: not in enabled drivers build config 00:02:34.457 bus/vmbus: not in enabled drivers build config 00:02:34.457 common/cnxk: not in enabled drivers build config 00:02:34.457 common/mlx5: not in enabled drivers build config 00:02:34.457 common/nfp: not in enabled drivers build config 00:02:34.457 common/qat: not in enabled drivers build config 00:02:34.457 common/sfc_efx: not in enabled drivers build config 00:02:34.457 mempool/bucket: not in enabled drivers build config 00:02:34.457 mempool/cnxk: not in enabled drivers build config 00:02:34.457 mempool/dpaa: not in enabled drivers build config 00:02:34.457 mempool/dpaa2: not in enabled drivers build config 00:02:34.457 mempool/octeontx: not in enabled drivers build config 00:02:34.457 mempool/stack: not in enabled drivers build config 00:02:34.457 dma/cnxk: not in enabled drivers build config 00:02:34.457 dma/dpaa: not in enabled drivers build config 00:02:34.457 dma/dpaa2: not in enabled drivers build config 00:02:34.457 
dma/hisilicon: not in enabled drivers build config 00:02:34.457 dma/idxd: not in enabled drivers build config 00:02:34.457 dma/ioat: not in enabled drivers build config 00:02:34.457 dma/skeleton: not in enabled drivers build config 00:02:34.457 net/af_packet: not in enabled drivers build config 00:02:34.457 net/af_xdp: not in enabled drivers build config 00:02:34.457 net/ark: not in enabled drivers build config 00:02:34.457 net/atlantic: not in enabled drivers build config 00:02:34.457 net/avp: not in enabled drivers build config 00:02:34.457 net/axgbe: not in enabled drivers build config 00:02:34.457 net/bnx2x: not in enabled drivers build config 00:02:34.457 net/bnxt: not in enabled drivers build config 00:02:34.457 net/bonding: not in enabled drivers build config 00:02:34.457 net/cnxk: not in enabled drivers build config 00:02:34.457 net/cpfl: not in enabled drivers build config 00:02:34.457 net/cxgbe: not in enabled drivers build config 00:02:34.457 net/dpaa: not in enabled drivers build config 00:02:34.457 net/dpaa2: not in enabled drivers build config 00:02:34.457 net/e1000: not in enabled drivers build config 00:02:34.457 net/ena: not in enabled drivers build config 00:02:34.457 net/enetc: not in enabled drivers build config 00:02:34.457 net/enetfec: not in enabled drivers build config 00:02:34.457 net/enic: not in enabled drivers build config 00:02:34.457 net/failsafe: not in enabled drivers build config 00:02:34.457 net/fm10k: not in enabled drivers build config 00:02:34.457 net/gve: not in enabled drivers build config 00:02:34.457 net/hinic: not in enabled drivers build config 00:02:34.457 net/hns3: not in enabled drivers build config 00:02:34.457 net/iavf: not in enabled drivers build config 00:02:34.457 net/ice: not in enabled drivers build config 00:02:34.457 net/idpf: not in enabled drivers build config 00:02:34.457 net/igc: not in enabled drivers build config 00:02:34.457 net/ionic: not in enabled drivers build config 00:02:34.457 net/ipn3ke: not in enabled drivers build config 00:02:34.457 net/ixgbe: not in enabled drivers build config 00:02:34.457 net/mana: not in enabled drivers build config 00:02:34.457 net/memif: not in enabled drivers build config 00:02:34.457 net/mlx4: not in enabled drivers build config 00:02:34.457 net/mlx5: not in enabled drivers build config 00:02:34.457 net/mvneta: not in enabled drivers build config 00:02:34.457 net/mvpp2: not in enabled drivers build config 00:02:34.457 net/netvsc: not in enabled drivers build config 00:02:34.457 net/nfb: not in enabled drivers build config 00:02:34.457 net/nfp: not in enabled drivers build config 00:02:34.457 net/ngbe: not in enabled drivers build config 00:02:34.457 net/null: not in enabled drivers build config 00:02:34.457 net/octeontx: not in enabled drivers build config 00:02:34.457 net/octeon_ep: not in enabled drivers build config 00:02:34.457 net/pcap: not in enabled drivers build config 00:02:34.457 net/pfe: not in enabled drivers build config 00:02:34.457 net/qede: not in enabled drivers build config 00:02:34.457 net/ring: not in enabled drivers build config 00:02:34.457 net/sfc: not in enabled drivers build config 00:02:34.457 net/softnic: not in enabled drivers build config 00:02:34.457 net/tap: not in enabled drivers build config 00:02:34.457 net/thunderx: not in enabled drivers build config 00:02:34.457 net/txgbe: not in enabled drivers build config 00:02:34.457 net/vdev_netvsc: not in enabled drivers build config 00:02:34.457 net/vhost: not in enabled drivers build config 00:02:34.457 net/virtio: 
not in enabled drivers build config 00:02:34.457 net/vmxnet3: not in enabled drivers build config 00:02:34.457 raw/cnxk_bphy: not in enabled drivers build config 00:02:34.457 raw/cnxk_gpio: not in enabled drivers build config 00:02:34.457 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:34.457 raw/ifpga: not in enabled drivers build config 00:02:34.457 raw/ntb: not in enabled drivers build config 00:02:34.457 raw/skeleton: not in enabled drivers build config 00:02:34.457 crypto/armv8: not in enabled drivers build config 00:02:34.457 crypto/bcmfs: not in enabled drivers build config 00:02:34.457 crypto/caam_jr: not in enabled drivers build config 00:02:34.457 crypto/ccp: not in enabled drivers build config 00:02:34.457 crypto/cnxk: not in enabled drivers build config 00:02:34.457 crypto/dpaa_sec: not in enabled drivers build config 00:02:34.457 crypto/dpaa2_sec: not in enabled drivers build config 00:02:34.457 crypto/ipsec_mb: not in enabled drivers build config 00:02:34.457 crypto/mlx5: not in enabled drivers build config 00:02:34.457 crypto/mvsam: not in enabled drivers build config 00:02:34.457 crypto/nitrox: not in enabled drivers build config 00:02:34.457 crypto/null: not in enabled drivers build config 00:02:34.457 crypto/octeontx: not in enabled drivers build config 00:02:34.457 crypto/openssl: not in enabled drivers build config 00:02:34.457 crypto/scheduler: not in enabled drivers build config 00:02:34.457 crypto/uadk: not in enabled drivers build config 00:02:34.457 crypto/virtio: not in enabled drivers build config 00:02:34.457 compress/isal: not in enabled drivers build config 00:02:34.457 compress/mlx5: not in enabled drivers build config 00:02:34.457 compress/octeontx: not in enabled drivers build config 00:02:34.457 compress/zlib: not in enabled drivers build config 00:02:34.457 regex/mlx5: not in enabled drivers build config 00:02:34.457 regex/cn9k: not in enabled drivers build config 00:02:34.457 ml/cnxk: not in enabled drivers build config 00:02:34.457 vdpa/ifc: not in enabled drivers build config 00:02:34.457 vdpa/mlx5: not in enabled drivers build config 00:02:34.457 vdpa/nfp: not in enabled drivers build config 00:02:34.457 vdpa/sfc: not in enabled drivers build config 00:02:34.457 event/cnxk: not in enabled drivers build config 00:02:34.457 event/dlb2: not in enabled drivers build config 00:02:34.457 event/dpaa: not in enabled drivers build config 00:02:34.457 event/dpaa2: not in enabled drivers build config 00:02:34.457 event/dsw: not in enabled drivers build config 00:02:34.457 event/opdl: not in enabled drivers build config 00:02:34.457 event/skeleton: not in enabled drivers build config 00:02:34.457 event/sw: not in enabled drivers build config 00:02:34.457 event/octeontx: not in enabled drivers build config 00:02:34.457 baseband/acc: not in enabled drivers build config 00:02:34.457 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:34.457 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:34.457 baseband/la12xx: not in enabled drivers build config 00:02:34.457 baseband/null: not in enabled drivers build config 00:02:34.457 baseband/turbo_sw: not in enabled drivers build config 00:02:34.457 gpu/cuda: not in enabled drivers build config 00:02:34.457 00:02:34.457 00:02:34.457 Build targets in project: 220 00:02:34.457 00:02:34.457 DPDK 23.11.0 00:02:34.457 00:02:34.457 User defined options 00:02:34.457 libdir : lib 00:02:34.457 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:34.457 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:02:34.457 c_link_args : 00:02:34.457 enable_docs : false 00:02:34.457 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:34.457 enable_kmods : false 00:02:34.457 machine : native 00:02:34.457 tests : false 00:02:34.457 00:02:34.457 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.457 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:34.457 12:22:39 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:34.457 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:34.716 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:34.716 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:34.716 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:34.716 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:34.716 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:34.716 [6/710] Linking static target lib/librte_kvargs.a 00:02:34.716 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:34.716 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:34.975 [9/710] Linking static target lib/librte_log.a 00:02:34.975 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:34.975 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.234 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:35.234 [13/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:35.234 [14/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:35.234 [15/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.234 [16/710] Linking target lib/librte_log.so.24.0 00:02:35.234 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.493 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:35.493 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.493 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:35.752 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:35.752 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:35.752 [23/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:35.752 [24/710] Linking target lib/librte_kvargs.so.24.0 00:02:35.752 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:36.010 [26/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:36.010 [27/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:36.010 [28/710] Linking static target lib/librte_telemetry.a 00:02:36.010 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:36.010 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:36.010 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:36.010 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:36.269 [33/710] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:36.269 [34/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.269 [35/710] Linking target lib/librte_telemetry.so.24.0 00:02:36.527 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:36.527 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:36.527 [38/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:36.527 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:36.527 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:36.527 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:36.527 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:36.527 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:36.786 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:36.786 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:36.786 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:37.045 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:37.045 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:37.045 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:37.045 [50/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:37.303 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:37.303 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:37.303 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:37.303 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:37.562 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:37.562 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:37.562 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:37.562 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:37.562 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:37.562 [60/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:37.562 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:37.820 [62/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:37.820 [63/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:37.820 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:37.820 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:37.820 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:38.078 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:38.078 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:38.337 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:38.337 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:38.337 [71/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:38.337 [72/710] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:38.337 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:38.337 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:38.337 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:38.337 [76/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:38.337 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:38.595 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:38.595 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:38.853 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:38.853 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:38.853 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:39.112 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:39.112 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:39.112 [85/710] Linking static target lib/librte_ring.a 00:02:39.112 [86/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:39.370 [87/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:39.370 [88/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:39.370 [89/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.370 [90/710] Linking static target lib/librte_eal.a 00:02:39.628 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:39.628 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:39.628 [93/710] Linking static target lib/librte_mempool.a 00:02:39.628 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:39.628 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:39.886 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:39.886 [97/710] Linking static target lib/librte_rcu.a 00:02:39.886 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:39.886 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:39.886 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:40.145 [101/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:40.145 [102/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.145 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.145 [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:40.403 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:40.403 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:40.403 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:40.403 [108/710] Linking static target lib/librte_mbuf.a 00:02:40.403 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:40.403 [110/710] Linking static target lib/librte_net.a 00:02:40.662 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:40.662 [112/710] Linking static target lib/librte_meter.a 00:02:40.662 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.921 [114/710] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:40.921 [115/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.921 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:40.921 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:40.921 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:41.179 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.746 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:41.746 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:41.746 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:42.004 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:42.004 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:42.004 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:42.004 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:42.004 [127/710] Linking static target lib/librte_pci.a 00:02:42.262 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:42.262 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:42.262 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.262 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:42.262 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:42.520 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:42.520 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:42.520 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:42.520 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:42.520 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:42.520 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:42.520 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:42.520 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:42.779 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:42.779 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:42.779 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:43.037 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:43.037 [145/710] Linking static target lib/librte_cmdline.a 00:02:43.037 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:43.296 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:43.296 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:43.296 [149/710] Linking static target lib/librte_metrics.a 00:02:43.296 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:43.554 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.813 [152/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:43.813 [153/710] Generating lib/cmdline.sym_chk with a custom command (wrapped 
by meson to capture output) 00:02:43.813 [154/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:43.813 [155/710] Linking static target lib/librte_timer.a 00:02:44.084 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.666 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:44.666 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:44.666 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:44.666 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:45.232 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:45.232 [162/710] Linking static target lib/librte_ethdev.a 00:02:45.232 [163/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:45.491 [164/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:45.491 [165/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:45.491 [166/710] Linking static target lib/librte_bitratestats.a 00:02:45.491 [167/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:45.491 [168/710] Linking static target lib/librte_bbdev.a 00:02:45.749 [169/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.749 [170/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:45.749 [171/710] Linking static target lib/librte_hash.a 00:02:45.749 [172/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.007 [173/710] Linking target lib/librte_eal.so.24.0 00:02:46.007 [174/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:46.007 [175/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:46.007 [176/710] Linking static target lib/acl/libavx2_tmp.a 00:02:46.007 [177/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:46.007 [178/710] Linking target lib/librte_ring.so.24.0 00:02:46.266 [179/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.266 [180/710] Linking target lib/librte_meter.so.24.0 00:02:46.266 [181/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:46.266 [182/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.266 [183/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:46.266 [184/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:46.266 [185/710] Linking target lib/librte_rcu.so.24.0 00:02:46.266 [186/710] Linking target lib/librte_mempool.so.24.0 00:02:46.266 [187/710] Linking target lib/librte_pci.so.24.0 00:02:46.266 [188/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:46.266 [189/710] Linking target lib/librte_timer.so.24.0 00:02:46.266 [190/710] Linking static target lib/acl/libavx512_tmp.a 00:02:46.266 [191/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:46.536 [192/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:46.536 [193/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:46.536 [194/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:46.536 [195/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:46.536 [196/710] Linking target 
lib/librte_mbuf.so.24.0 00:02:46.536 [197/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:46.536 [198/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:46.536 [199/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:46.536 [200/710] Linking static target lib/librte_acl.a 00:02:46.536 [201/710] Linking target lib/librte_net.so.24.0 00:02:46.806 [202/710] Linking target lib/librte_bbdev.so.24.0 00:02:46.806 [203/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:46.806 [204/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:46.806 [205/710] Linking static target lib/librte_cfgfile.a 00:02:46.806 [206/710] Linking target lib/librte_cmdline.so.24.0 00:02:46.806 [207/710] Linking target lib/librte_hash.so.24.0 00:02:47.064 [208/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.064 [209/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:47.064 [210/710] Linking target lib/librte_acl.so.24.0 00:02:47.064 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:47.064 [212/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:47.064 [213/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:47.064 [214/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.323 [215/710] Linking target lib/librte_cfgfile.so.24.0 00:02:47.323 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:47.581 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:47.581 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:47.581 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:47.840 [220/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:47.840 [221/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:47.840 [222/710] Linking static target lib/librte_bpf.a 00:02:47.840 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:47.840 [224/710] Linking static target lib/librte_compressdev.a 00:02:48.098 [225/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:48.098 [226/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.098 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:48.357 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:48.357 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:48.357 [230/710] Linking static target lib/librte_distributor.a 00:02:48.357 [231/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.357 [232/710] Linking target lib/librte_compressdev.so.24.0 00:02:48.357 [233/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:48.615 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.615 [235/710] Linking target lib/librte_distributor.so.24.0 00:02:48.615 [236/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:48.615 [237/710] Linking static target lib/librte_dmadev.a 
00:02:48.874 [238/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:49.132 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.132 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:49.132 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:49.132 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:49.390 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:49.649 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:49.649 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:49.649 [246/710] Linking static target lib/librte_efd.a 00:02:49.907 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:49.907 [248/710] Linking static target lib/librte_cryptodev.a 00:02:49.907 [249/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.907 [250/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:49.907 [251/710] Linking target lib/librte_efd.so.24.0 00:02:50.166 [252/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.166 [253/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:50.425 [254/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:50.425 [255/710] Linking static target lib/librte_dispatcher.a 00:02:50.425 [256/710] Linking target lib/librte_ethdev.so.24.0 00:02:50.425 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:50.425 [258/710] Linking target lib/librte_metrics.so.24.0 00:02:50.683 [259/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:50.683 [260/710] Linking target lib/librte_bpf.so.24.0 00:02:50.683 [261/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:50.683 [262/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:50.683 [263/710] Linking static target lib/librte_gpudev.a 00:02:50.683 [264/710] Linking target lib/librte_bitratestats.so.24.0 00:02:50.683 [265/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:50.683 [266/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.941 [267/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:50.941 [268/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:51.199 [269/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.199 [270/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:51.199 [271/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:51.199 [272/710] Linking target lib/librte_cryptodev.so.24.0 00:02:51.199 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:51.457 [274/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:51.457 [275/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.457 [276/710] Linking target lib/librte_gpudev.so.24.0 00:02:51.457 [277/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:51.457 
[278/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:51.457 [279/710] Linking static target lib/librte_eventdev.a 00:02:51.715 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:51.715 [281/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:51.715 [282/710] Linking static target lib/librte_gro.a 00:02:51.715 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:51.715 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:51.715 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:51.974 [286/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.974 [287/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:51.974 [288/710] Linking target lib/librte_gro.so.24.0 00:02:52.232 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:52.232 [290/710] Linking static target lib/librte_gso.a 00:02:52.232 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:52.232 [292/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.491 [293/710] Linking target lib/librte_gso.so.24.0 00:02:52.491 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:52.491 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:52.491 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:52.491 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:52.491 [298/710] Linking static target lib/librte_jobstats.a 00:02:52.491 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:52.748 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:52.748 [301/710] Linking static target lib/librte_ip_frag.a 00:02:52.748 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:52.748 [303/710] Linking static target lib/librte_latencystats.a 00:02:52.748 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.006 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:53.006 [306/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.006 [307/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.006 [308/710] Linking target lib/librte_ip_frag.so.24.0 00:02:53.006 [309/710] Linking target lib/librte_latencystats.so.24.0 00:02:53.006 [310/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:53.264 [311/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:53.264 [312/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:53.264 [313/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:53.264 [314/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:53.264 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:53.264 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:53.264 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:53.522 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.522 [319/710] 
Linking target lib/librte_eventdev.so.24.0 00:02:53.780 [320/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:53.780 [321/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:53.780 [322/710] Linking static target lib/librte_lpm.a 00:02:53.780 [323/710] Linking target lib/librte_dispatcher.so.24.0 00:02:54.038 [324/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:54.038 [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:54.038 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:54.038 [327/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:54.038 [328/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:54.038 [329/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:54.038 [330/710] Linking static target lib/librte_pcapng.a 00:02:54.038 [331/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:54.038 [332/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.296 [333/710] Linking target lib/librte_lpm.so.24.0 00:02:54.296 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:54.296 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.296 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:54.554 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:54.554 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:54.554 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:54.812 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:54.812 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:54.812 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:54.812 [343/710] Linking static target lib/librte_power.a 00:02:54.813 [344/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:54.813 [345/710] Linking static target lib/librte_regexdev.a 00:02:55.070 [346/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:55.070 [347/710] Linking static target lib/librte_rawdev.a 00:02:55.070 [348/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:55.070 [349/710] Linking static target lib/librte_member.a 00:02:55.070 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:55.070 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:55.070 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:55.328 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.328 [354/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:55.328 [355/710] Linking static target lib/librte_mldev.a 00:02:55.328 [356/710] Linking target lib/librte_member.so.24.0 00:02:55.329 [357/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.329 [358/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.329 [359/710] Linking target lib/librte_rawdev.so.24.0 00:02:55.587 [360/710] Linking target lib/librte_power.so.24.0 00:02:55.587 
[361/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:55.587 [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:55.587 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.587 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:55.847 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:55.847 [366/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:55.847 [367/710] Linking static target lib/librte_rib.a 00:02:55.847 [368/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:56.106 [369/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:56.106 [370/710] Linking static target lib/librte_reorder.a 00:02:56.106 [371/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:56.106 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:56.106 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:56.365 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:56.365 [375/710] Linking static target lib/librte_stack.a 00:02:56.365 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.365 [377/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:56.365 [378/710] Linking static target lib/librte_security.a 00:02:56.365 [379/710] Linking target lib/librte_reorder.so.24.0 00:02:56.365 [380/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.365 [381/710] Linking target lib/librte_rib.so.24.0 00:02:56.365 [382/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.624 [383/710] Linking target lib/librte_stack.so.24.0 00:02:56.624 [384/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:56.624 [385/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:56.624 [386/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.624 [387/710] Linking target lib/librte_mldev.so.24.0 00:02:56.882 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.882 [389/710] Linking target lib/librte_security.so.24.0 00:02:56.882 [390/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:56.882 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:56.882 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:57.202 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:57.202 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:57.202 [395/710] Linking static target lib/librte_sched.a 00:02:57.460 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:57.460 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.719 [398/710] Linking target lib/librte_sched.so.24.0 00:02:57.719 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:57.719 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:57.719 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:57.977 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:57.977 [403/710] Compiling C 
object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:58.235 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:58.493 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:58.493 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:58.493 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:58.752 [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:58.752 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:58.752 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:59.011 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:59.011 [412/710] Linking static target lib/librte_ipsec.a 00:02:59.011 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:59.270 [414/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.270 [415/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:59.270 [416/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:59.270 [417/710] Linking target lib/librte_ipsec.so.24.0 00:02:59.270 [418/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:59.270 [419/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:59.529 [420/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:59.529 [421/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:59.529 [422/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:59.529 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:00.464 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:00.464 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:00.464 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:00.464 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:00.464 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:00.464 [429/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:00.464 [430/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:00.464 [431/710] Linking static target lib/librte_fib.a 00:03:00.464 [432/710] Linking static target lib/librte_pdcp.a 00:03:00.722 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.722 [434/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.980 [435/710] Linking target lib/librte_fib.so.24.0 00:03:00.980 [436/710] Linking target lib/librte_pdcp.so.24.0 00:03:00.980 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:01.547 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:01.547 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:01.547 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:01.547 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:01.547 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:01.805 [443/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:01.805 [444/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:02.063 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 
00:03:02.063 [446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:02.063 [447/710] Linking static target lib/librte_port.a 00:03:02.321 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:02.322 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:02.322 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:02.580 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:02.580 [452/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:02.580 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:02.580 [454/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.838 [455/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:02.838 [456/710] Linking static target lib/librte_pdump.a 00:03:02.838 [457/710] Linking target lib/librte_port.so.24.0 00:03:02.838 [458/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:03.096 [459/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:03.096 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.096 [461/710] Linking target lib/librte_pdump.so.24.0 00:03:03.096 [462/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:03.663 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:03.663 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:03.663 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:03.663 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:03.663 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:03.663 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:03.922 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:03.922 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:04.180 [471/710] Linking static target lib/librte_table.a 00:03:04.180 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:04.180 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:04.746 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.746 [475/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:04.746 [476/710] Linking target lib/librte_table.so.24.0 00:03:05.004 [477/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:05.004 [478/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:05.004 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:05.262 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:05.262 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:05.521 [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:05.521 [483/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:05.779 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:05.779 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:05.779 [486/710] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:06.351 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:06.351 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:06.351 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:06.351 [490/710] Linking static target lib/librte_graph.a 00:03:06.351 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:06.609 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:06.609 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:06.868 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.126 [495/710] Linking target lib/librte_graph.so.24.0 00:03:07.126 [496/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:07.126 [497/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:07.127 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:07.127 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:07.693 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:07.693 [501/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:07.693 [502/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:07.693 [503/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:07.693 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:07.951 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:07.951 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:08.209 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:08.209 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:08.468 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:08.468 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:08.468 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:08.725 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:08.725 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:08.725 [514/710] Linking static target lib/librte_node.a 00:03:08.725 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:08.983 [516/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:08.983 [517/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:08.983 [518/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:08.983 [519/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.253 [520/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:09.253 [521/710] Linking target lib/librte_node.so.24.0 00:03:09.253 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:09.253 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:09.253 [524/710] Linking static target drivers/librte_bus_vdev.a 00:03:09.536 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:09.536 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:09.536 [527/710] Linking static target 
drivers/librte_bus_pci.a 00:03:09.536 [528/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.536 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:09.536 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:09.536 [531/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:09.794 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:09.794 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:09.794 [534/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:09.794 [535/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:09.794 [536/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:09.794 [537/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:10.053 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.053 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:10.053 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:10.053 [541/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:10.053 [542/710] Linking static target drivers/librte_mempool_ring.a 00:03:10.053 [543/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:10.053 [544/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:10.053 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:10.315 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:10.578 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:11.145 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:11.145 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:11.145 [550/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:11.145 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:12.079 [552/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:12.079 [553/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:12.079 [554/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:12.079 [555/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:12.079 [556/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:12.079 [557/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:12.645 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:12.645 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:12.903 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:12.903 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:12.903 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:13.470 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:13.470 [564/710] 
Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:13.728 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:13.728 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:13.987 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:14.244 [568/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:14.244 [569/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:14.244 [570/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:14.502 [571/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:14.502 [572/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:14.502 [573/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:14.759 [574/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:15.016 [575/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:15.017 [576/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:15.017 [577/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:15.017 [578/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:15.275 [579/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:15.275 [580/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:15.275 [581/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:15.275 [582/710] Linking static target lib/librte_vhost.a 00:03:15.533 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:15.533 [584/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:15.533 [585/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:15.791 [586/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:15.791 [587/710] Linking static target drivers/librte_net_i40e.a 00:03:15.791 [588/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:15.791 [589/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:15.791 [590/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:15.791 [591/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:15.791 [592/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:16.357 [593/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:16.357 [594/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.357 [595/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:16.357 [596/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:16.615 [597/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:16.615 [598/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.615 [599/710] Linking target lib/librte_vhost.so.24.0 00:03:16.873 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:17.131 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:17.131 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:17.131 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:17.390 
[604/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:17.390 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:17.390 [606/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:17.649 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:17.907 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:17.907 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:18.166 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:18.166 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:18.166 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:18.166 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:18.424 [614/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:18.424 [615/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:18.424 [616/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:18.424 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:18.683 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:18.941 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:19.199 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:19.199 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:19.199 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:19.458 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:20.024 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:20.283 [625/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:20.283 [626/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:20.283 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:20.541 [628/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:20.541 [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:20.541 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:20.541 [631/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:20.800 [632/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:20.800 [633/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:21.103 [634/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:21.103 [635/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:21.103 [636/710] Linking static target lib/librte_pipeline.a 00:03:21.103 [637/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:21.103 [638/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:21.370 [639/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 
00:03:21.370 [640/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:21.629 [641/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:21.629 [642/710] Linking target app/dpdk-dumpcap 00:03:21.629 [643/710] Linking target app/dpdk-graph 00:03:21.629 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:21.629 [645/710] Linking target app/dpdk-pdump 00:03:21.887 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:21.887 [647/710] Linking target app/dpdk-proc-info 00:03:21.887 [648/710] Linking target app/dpdk-test-acl 00:03:22.145 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:22.145 [650/710] Linking target app/dpdk-test-cmdline 00:03:22.145 [651/710] Linking target app/dpdk-test-compress-perf 00:03:22.145 [652/710] Linking target app/dpdk-test-crypto-perf 00:03:22.145 [653/710] Linking target app/dpdk-test-dma-perf 00:03:22.403 [654/710] Linking target app/dpdk-test-fib 00:03:22.403 [655/710] Linking target app/dpdk-test-flow-perf 00:03:22.403 [656/710] Linking target app/dpdk-test-gpudev 00:03:22.403 [657/710] Linking target app/dpdk-test-eventdev 00:03:22.662 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:22.662 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:22.921 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:22.921 [661/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:22.921 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:23.179 [663/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:23.179 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:23.179 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:23.437 [666/710] Linking target app/dpdk-test-bbdev 00:03:23.437 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:23.437 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:23.696 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:23.696 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:23.696 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:23.955 [672/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.955 [673/710] Linking target lib/librte_pipeline.so.24.0 00:03:23.955 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:24.213 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:24.213 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:24.213 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:24.472 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:24.731 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:24.731 [680/710] Linking target app/dpdk-test-pipeline 00:03:24.731 [681/710] Linking target app/dpdk-test-mldev 00:03:24.731 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:24.989 [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 
00:03:25.555 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:25.555 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:25.555 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:25.555 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:25.555 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:25.813 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:26.071 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:26.071 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:26.071 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:26.071 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:26.638 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:26.897 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:26.897 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:27.155 [697/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:27.413 [698/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:27.413 [699/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:27.413 [700/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:27.413 [701/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:27.672 [702/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:27.672 [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:27.672 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:27.930 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:27.930 [706/710] Linking target app/dpdk-test-regex 00:03:27.930 [707/710] Linking target app/dpdk-test-sad 00:03:28.496 [708/710] Linking target app/dpdk-testpmd 00:03:28.496 [709/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:28.756 [710/710] Linking target app/dpdk-test-security-perf 00:03:28.756 12:23:33 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:28.756 12:23:33 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:28.756 12:23:33 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:29.015 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:29.015 [0/1] Installing files. 
00:03:29.278 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.278 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.279 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.279 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.279 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:29.280 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:29.280 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:29.281 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.282 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:29.282 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:29.283 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:29.283 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.283 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:29.542 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
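(Editor's note, not part of the build log: the entries above record the DPDK static (.a) and shared (.so.24.0) libraries being copied into /home/vagrant/spdk_repo/dpdk/build/lib, with the matching public headers installed under build/include further below. Assuming that install prefix, a minimal C sketch such as the following could be used to confirm the installed tree is usable; rte_version() is provided by librte_eal and declared in the installed rte_version.h. The compile/link line shown afterwards is an assumption for illustration only -- the log does not verify it, and additional dependency libraries may be required.)

/* sketch.c - assumes the install prefix /home/vagrant/spdk_repo/dpdk/build shown in this log */
#include <stdio.h>
#include <rte_version.h>   /* installed into build/include later in this log */

int main(void)
{
    /* Print the version string reported by the installed DPDK libraries. */
    printf("%s\n", rte_version());
    return 0;
}

One plausible (unverified) way to build it against the freshly installed tree:
cc sketch.c -I/home/vagrant/spdk_repo/dpdk/build/include -L/home/vagrant/spdk_repo/dpdk/build/lib -lrte_eal -o sketch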
00:03:29.542 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.542 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.803 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.803 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.804 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.804 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:29.804 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.804 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:29.804 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.804 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:29.804 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.804 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:29.804 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.804 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.805 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.806 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:29.807 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:29.807 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:29.807 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:29.807 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:29.807 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:29.807 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:29.807 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:29.807 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:29.807 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:29.807 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:29.807 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:29.807 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:29.807 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:29.807 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:29.807 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:29.807 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:29.807 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:29.807 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:29.807 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:29.807 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:29.807 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:29.807 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:29.807 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:29.807 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:29.807 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:29.807 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:29.807 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:29.807 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:29.807 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:29.807 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:29.807 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:29.807 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:29.807 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:29.807 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:29.807 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:29.807 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:29.807 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:29.807 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:29.807 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:29.807 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:29.807 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:29.807 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:29.807 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:29.807 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:29.807 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:29.807 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:29.807 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:29.807 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:29.807 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:29.807 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:29.807 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:29.807 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:29.807 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:29.807 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:29.807 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:29.807 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:29.807 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:29.807 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:29.807 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:29.807 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:29.807 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:29.807 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:29.807 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:29.807 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:29.807 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:29.807 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:29.807 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:29.807 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:29.807 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:29.807 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:29.807 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:29.807 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:29.807 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:29.807 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:29.807 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:29.807 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:29.808 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:29.808 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:29.808 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:29.808 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:29.808 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:29.808 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:29.808 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:29.808 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:29.808 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:29.808 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:29.808 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:29.808 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:29.808 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:29.808 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:29.808 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:29.808 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:29.808 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:29.808 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:29.808 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:29.808 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:29.808 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:29.808 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:29.808 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:29.808 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:29.808 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:29.808 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:29.808 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:29.808 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:29.808 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:29.808 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:29.808 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:29.808 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:29.808 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:29.808 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:29.808 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:29.808 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:29.808 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:29.808 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:29.808 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:29.808 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:29.808 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:29.808 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:29.808 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:29.808 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:29.808 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:29.808 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:29.808 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:29.808 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:29.808 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:29.808 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:29.808 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:29.808 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:29.808 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:29.808 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:29.808 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:29.808 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
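The driver entries above are relocated into the dpdk/pmds-24.0 plugin directory and then wired up with the usual two-level versioned symlink chain (full version, ABI major, unversioned name). A rough sketch of that layout for a hypothetical librte_example driver, shown only to illustrate the pattern recorded in this log:

  PMDDIR=/tmp/prefix/lib/dpdk/pmds-24.0                            # hypothetical staging prefix
  mkdir -p "$PMDDIR"
  cp librte_example.so.24.0 "$PMDDIR"/                             # the real shared object
  ln -sf librte_example.so.24.0 "$PMDDIR"/librte_example.so.24     # ABI-major link (matches the SONAME)
  ln -sf librte_example.so.24 "$PMDDIR"/librte_example.so          # unversioned link used at build/link time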
00:03:29.808 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:29.808 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:29.808 12:23:35 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:29.808 12:23:35 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:29.808 00:03:29.808 real 1m2.808s 00:03:29.808 user 7m43.116s 00:03:29.808 sys 1m4.705s 00:03:29.808 12:23:35 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:29.808 ************************************ 00:03:29.808 END TEST build_native_dpdk 00:03:29.808 ************************************ 00:03:29.808 12:23:35 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:30.066 12:23:35 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:30.066 12:23:35 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:30.066 12:23:35 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:30.066 12:23:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:30.066 12:23:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:30.066 12:23:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:30.066 12:23:35 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:30.067 12:23:35 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:30.067 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:30.325 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.325 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:30.325 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:30.583 Using 'verbs' RDMA provider 00:03:43.754 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:58.635 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:58.635 Creating mk/config.mk...done. 00:03:58.635 Creating mk/cc.flags.mk...done. 00:03:58.635 Type 'make' to build. 00:03:58.635 12:24:02 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:58.635 12:24:02 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:58.635 12:24:02 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:58.635 12:24:02 -- common/autotest_common.sh@10 -- $ set +x 00:03:58.635 ************************************ 00:03:58.635 START TEST make 00:03:58.635 ************************************ 00:03:58.635 12:24:02 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:58.635 make[1]: Nothing to be done for 'all'. 
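The configure invocation above points SPDK at the DPDK tree that was just installed and, as the log notes, resolves its libraries through the pkg-config files installed into build/lib/pkgconfig. A quick way to inspect what that install exports (illustrative only, not the exact commands the autobuild scripts run):

  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk        # version of the DPDK build installed above
  pkg-config --cflags --libs libdpdk     # include and link flags that a consumer such as SPDK picks up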
00:03:58.894 The Meson build system 00:03:58.894 Version: 1.5.0 00:03:58.895 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:03:58.895 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:58.895 Build type: native build 00:03:58.895 Project name: libvfio-user 00:03:58.895 Project version: 0.0.1 00:03:58.895 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:58.895 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:58.895 Host machine cpu family: x86_64 00:03:58.895 Host machine cpu: x86_64 00:03:58.895 Run-time dependency threads found: YES 00:03:58.895 Library dl found: YES 00:03:58.895 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:58.895 Run-time dependency json-c found: YES 0.17 00:03:58.895 Run-time dependency cmocka found: YES 1.1.7 00:03:58.895 Program pytest-3 found: NO 00:03:58.895 Program flake8 found: NO 00:03:58.895 Program misspell-fixer found: NO 00:03:58.895 Program restructuredtext-lint found: NO 00:03:58.895 Program valgrind found: YES (/usr/bin/valgrind) 00:03:58.895 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:58.895 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:58.895 Compiler for C supports arguments -Wwrite-strings: YES 00:03:58.895 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:58.895 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:03:58.895 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:03:58.895 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
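A build directory like the one being configured here is normally produced with a meson setup / ninja pair; a generic sketch of the equivalent commands (the exact invocation SPDK's wrapper scripts use is not shown in this log; the option values mirror the summary printed below):

  meson setup build-debug /path/to/libvfio-user \
      --buildtype=debug --default-library=shared --libdir=/usr/local/lib
  ninja -C build-debug
  DESTDIR=/path/to/stage meson install --quiet -C build-debug   # same install pattern as the step recorded further down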
00:03:58.895 Build targets in project: 8 00:03:58.895 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:58.895 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:58.895 00:03:58.895 libvfio-user 0.0.1 00:03:58.895 00:03:58.895 User defined options 00:03:58.895 buildtype : debug 00:03:58.895 default_library: shared 00:03:58.895 libdir : /usr/local/lib 00:03:58.895 00:03:58.895 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:59.154 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:03:59.413 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:59.413 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:59.413 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:59.413 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:59.413 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:59.413 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:59.413 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:59.413 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:59.413 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:59.413 [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:59.672 [11/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:59.672 [12/37] Compiling C object samples/null.p/null.c.o 00:03:59.672 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:59.672 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:59.672 [15/37] Compiling C object samples/client.p/client.c.o 00:03:59.672 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:59.672 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:59.672 [18/37] Compiling C object samples/server.p/server.c.o 00:03:59.672 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:59.672 [20/37] Linking target samples/client 00:03:59.672 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:59.672 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:59.672 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:59.672 [24/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:59.672 [25/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:59.672 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:59.672 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:03:59.932 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:59.932 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:59.932 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:59.932 [31/37] Linking target test/unit_tests 00:03:59.932 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:59.932 [33/37] Linking target samples/null 00:03:59.932 [34/37] Linking target samples/shadow_ioeventfd_server 00:03:59.932 [35/37] Linking target samples/server 00:03:59.932 [36/37] Linking target samples/gpio-pci-idio-16 00:03:59.932 [37/37] Linking target samples/lspci 00:03:59.932 INFO: autodetecting backend as ninja 00:03:59.932 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:03:59.932 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:00.499 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:04:00.499 ninja: no work to do. 00:04:56.731 CC lib/ut/ut.o 00:04:56.731 CC lib/ut_mock/mock.o 00:04:56.731 CC lib/log/log.o 00:04:56.731 CC lib/log/log_flags.o 00:04:56.731 CC lib/log/log_deprecated.o 00:04:56.731 LIB libspdk_ut.a 00:04:56.731 LIB libspdk_log.a 00:04:56.731 SO libspdk_ut.so.2.0 00:04:56.731 LIB libspdk_ut_mock.a 00:04:56.731 SO libspdk_log.so.7.0 00:04:56.731 SO libspdk_ut_mock.so.6.0 00:04:56.731 SYMLINK libspdk_ut.so 00:04:56.731 SYMLINK libspdk_log.so 00:04:56.731 SYMLINK libspdk_ut_mock.so 00:04:56.731 CC lib/util/base64.o 00:04:56.731 CC lib/util/cpuset.o 00:04:56.731 CC lib/util/bit_array.o 00:04:56.731 CC lib/util/crc16.o 00:04:56.731 CC lib/util/crc32.o 00:04:56.731 CC lib/util/crc32c.o 00:04:56.731 CC lib/ioat/ioat.o 00:04:56.731 CXX lib/trace_parser/trace.o 00:04:56.731 CC lib/dma/dma.o 00:04:56.731 CC lib/vfio_user/host/vfio_user_pci.o 00:04:56.731 CC lib/util/crc32_ieee.o 00:04:56.731 CC lib/util/crc64.o 00:04:56.731 CC lib/util/dif.o 00:04:56.731 CC lib/util/fd.o 00:04:56.731 CC lib/util/fd_group.o 00:04:56.731 LIB libspdk_dma.a 00:04:56.731 CC lib/util/file.o 00:04:56.731 SO libspdk_dma.so.5.0 00:04:56.731 LIB libspdk_ioat.a 00:04:56.731 CC lib/util/hexlify.o 00:04:56.731 CC lib/util/iov.o 00:04:56.731 SO libspdk_ioat.so.7.0 00:04:56.731 SYMLINK libspdk_dma.so 00:04:56.731 CC lib/vfio_user/host/vfio_user.o 00:04:56.731 CC lib/util/math.o 00:04:56.731 CC lib/util/net.o 00:04:56.731 SYMLINK libspdk_ioat.so 00:04:56.731 CC lib/util/pipe.o 00:04:56.731 CC lib/util/strerror_tls.o 00:04:56.731 CC lib/util/string.o 00:04:56.731 CC lib/util/uuid.o 00:04:56.731 CC lib/util/xor.o 00:04:56.731 CC lib/util/zipf.o 00:04:56.731 CC lib/util/md5.o 00:04:56.731 LIB libspdk_vfio_user.a 00:04:56.731 SO libspdk_vfio_user.so.5.0 00:04:56.731 SYMLINK libspdk_vfio_user.so 00:04:56.731 LIB libspdk_util.a 00:04:56.731 SO libspdk_util.so.10.0 00:04:56.731 SYMLINK libspdk_util.so 00:04:56.731 LIB libspdk_trace_parser.a 00:04:56.731 SO libspdk_trace_parser.so.6.0 00:04:56.731 CC lib/rdma_utils/rdma_utils.o 00:04:56.731 CC lib/vmd/vmd.o 00:04:56.731 CC lib/rdma_provider/common.o 00:04:56.731 CC lib/vmd/led.o 00:04:56.731 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:56.731 CC lib/conf/conf.o 00:04:56.731 CC lib/env_dpdk/env.o 00:04:56.731 CC lib/idxd/idxd.o 00:04:56.731 CC lib/json/json_parse.o 00:04:56.731 SYMLINK libspdk_trace_parser.so 00:04:56.731 CC lib/json/json_util.o 00:04:56.731 CC lib/json/json_write.o 00:04:56.731 CC lib/idxd/idxd_user.o 00:04:56.731 LIB libspdk_rdma_provider.a 00:04:56.731 SO libspdk_rdma_provider.so.6.0 00:04:56.731 LIB libspdk_conf.a 00:04:56.731 CC lib/idxd/idxd_kernel.o 00:04:56.731 SO libspdk_conf.so.6.0 00:04:56.731 LIB libspdk_rdma_utils.a 00:04:56.731 SYMLINK libspdk_rdma_provider.so 00:04:56.731 CC lib/env_dpdk/memory.o 00:04:56.731 CC lib/env_dpdk/pci.o 00:04:56.731 SO libspdk_rdma_utils.so.1.0 00:04:56.731 SYMLINK libspdk_conf.so 00:04:56.731 CC lib/env_dpdk/init.o 00:04:56.731 SYMLINK libspdk_rdma_utils.so 00:04:56.731 CC lib/env_dpdk/threads.o 00:04:56.731 CC lib/env_dpdk/pci_ioat.o 00:04:56.731 CC lib/env_dpdk/pci_virtio.o 00:04:56.731 LIB libspdk_json.a 00:04:56.731 SO libspdk_json.so.6.0 00:04:56.731 CC lib/env_dpdk/pci_vmd.o 00:04:56.731 CC lib/env_dpdk/pci_idxd.o 00:04:56.731 SYMLINK 
libspdk_json.so 00:04:56.731 CC lib/env_dpdk/pci_event.o 00:04:56.731 LIB libspdk_idxd.a 00:04:56.731 LIB libspdk_vmd.a 00:04:56.731 SO libspdk_idxd.so.12.1 00:04:56.731 CC lib/env_dpdk/sigbus_handler.o 00:04:56.731 SO libspdk_vmd.so.6.0 00:04:56.731 SYMLINK libspdk_idxd.so 00:04:56.731 CC lib/env_dpdk/pci_dpdk.o 00:04:56.731 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:56.731 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:56.731 SYMLINK libspdk_vmd.so 00:04:56.731 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:56.731 CC lib/jsonrpc/jsonrpc_server.o 00:04:56.731 CC lib/jsonrpc/jsonrpc_client.o 00:04:56.731 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:56.731 LIB libspdk_jsonrpc.a 00:04:56.731 SO libspdk_jsonrpc.so.6.0 00:04:56.731 SYMLINK libspdk_jsonrpc.so 00:04:56.731 CC lib/rpc/rpc.o 00:04:56.731 LIB libspdk_env_dpdk.a 00:04:56.731 SO libspdk_env_dpdk.so.15.0 00:04:56.731 LIB libspdk_rpc.a 00:04:56.731 SYMLINK libspdk_env_dpdk.so 00:04:56.731 SO libspdk_rpc.so.6.0 00:04:56.731 SYMLINK libspdk_rpc.so 00:04:56.731 CC lib/keyring/keyring.o 00:04:56.731 CC lib/keyring/keyring_rpc.o 00:04:56.731 CC lib/trace/trace.o 00:04:56.731 CC lib/trace/trace_flags.o 00:04:56.731 CC lib/trace/trace_rpc.o 00:04:56.731 CC lib/notify/notify.o 00:04:56.731 CC lib/notify/notify_rpc.o 00:04:56.731 LIB libspdk_notify.a 00:04:56.731 SO libspdk_notify.so.6.0 00:04:56.731 LIB libspdk_keyring.a 00:04:56.731 SO libspdk_keyring.so.2.0 00:04:56.731 LIB libspdk_trace.a 00:04:56.731 SYMLINK libspdk_notify.so 00:04:56.731 SYMLINK libspdk_keyring.so 00:04:56.731 SO libspdk_trace.so.11.0 00:04:56.731 SYMLINK libspdk_trace.so 00:04:56.731 CC lib/thread/thread.o 00:04:56.731 CC lib/thread/iobuf.o 00:04:56.731 CC lib/sock/sock.o 00:04:56.731 CC lib/sock/sock_rpc.o 00:04:56.731 LIB libspdk_sock.a 00:04:56.731 SO libspdk_sock.so.10.0 00:04:56.731 SYMLINK libspdk_sock.so 00:04:56.731 CC lib/nvme/nvme_ctrlr.o 00:04:56.731 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:56.731 CC lib/nvme/nvme_fabric.o 00:04:56.731 CC lib/nvme/nvme_ns_cmd.o 00:04:56.731 CC lib/nvme/nvme_ns.o 00:04:56.731 CC lib/nvme/nvme_pcie_common.o 00:04:56.731 CC lib/nvme/nvme_pcie.o 00:04:56.731 CC lib/nvme/nvme.o 00:04:56.731 CC lib/nvme/nvme_qpair.o 00:04:56.731 CC lib/nvme/nvme_quirks.o 00:04:56.731 CC lib/nvme/nvme_transport.o 00:04:56.731 CC lib/nvme/nvme_discovery.o 00:04:56.731 LIB libspdk_thread.a 00:04:56.731 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:56.731 SO libspdk_thread.so.10.1 00:04:56.731 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:56.731 SYMLINK libspdk_thread.so 00:04:56.731 CC lib/nvme/nvme_tcp.o 00:04:56.731 CC lib/nvme/nvme_opal.o 00:04:56.731 CC lib/nvme/nvme_io_msg.o 00:04:56.732 CC lib/nvme/nvme_poll_group.o 00:04:56.732 CC lib/nvme/nvme_zns.o 00:04:56.732 CC lib/nvme/nvme_stubs.o 00:04:56.732 CC lib/nvme/nvme_auth.o 00:04:56.732 CC lib/nvme/nvme_cuse.o 00:04:56.732 CC lib/nvme/nvme_vfio_user.o 00:04:56.732 CC lib/accel/accel.o 00:04:56.732 CC lib/blob/blobstore.o 00:04:56.732 CC lib/blob/request.o 00:04:56.732 CC lib/nvme/nvme_rdma.o 00:04:56.732 CC lib/blob/zeroes.o 00:04:56.991 CC lib/blob/blob_bs_dev.o 00:04:56.991 CC lib/accel/accel_rpc.o 00:04:57.249 CC lib/init/json_config.o 00:04:57.249 CC lib/virtio/virtio.o 00:04:57.249 CC lib/virtio/virtio_vhost_user.o 00:04:57.249 CC lib/virtio/virtio_vfio_user.o 00:04:57.249 CC lib/virtio/virtio_pci.o 00:04:57.249 CC lib/init/subsystem.o 00:04:57.507 CC lib/vfu_tgt/tgt_endpoint.o 00:04:57.507 CC lib/fsdev/fsdev.o 00:04:57.507 CC lib/fsdev/fsdev_io.o 00:04:57.507 CC lib/fsdev/fsdev_rpc.o 00:04:57.507 CC lib/accel/accel_sw.o 
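The CC, LIB, SO and SYMLINK lines in this stretch of the log correspond to the usual compile, static-archive, shared-link and symlink stages of the SPDK makefiles. A rough illustration with a hypothetical libspdk_example (flags abbreviated, not the real SPDK link commands):

  cc -fPIC -c example.c -o example.o                          # CC  lib/example/example.o
  ar rcs libspdk_example.a example.o                          # LIB libspdk_example.a
  cc -shared -Wl,-soname,libspdk_example.so.10.0 \
      -o libspdk_example.so.10.0 example.o                    # SO  libspdk_example.so.10.0
  ln -sf libspdk_example.so.10.0 libspdk_example.so           # SYMLINK libspdk_example.so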
00:04:57.507 CC lib/vfu_tgt/tgt_rpc.o 00:04:57.507 LIB libspdk_virtio.a 00:04:57.507 CC lib/init/subsystem_rpc.o 00:04:57.507 SO libspdk_virtio.so.7.0 00:04:57.765 CC lib/init/rpc.o 00:04:57.765 SYMLINK libspdk_virtio.so 00:04:57.765 LIB libspdk_vfu_tgt.a 00:04:57.765 SO libspdk_vfu_tgt.so.3.0 00:04:57.765 LIB libspdk_accel.a 00:04:57.765 LIB libspdk_init.a 00:04:57.765 SYMLINK libspdk_vfu_tgt.so 00:04:58.024 SO libspdk_init.so.6.0 00:04:58.024 SO libspdk_accel.so.16.0 00:04:58.024 SYMLINK libspdk_init.so 00:04:58.024 SYMLINK libspdk_accel.so 00:04:58.024 LIB libspdk_nvme.a 00:04:58.024 LIB libspdk_fsdev.a 00:04:58.282 CC lib/event/app.o 00:04:58.282 CC lib/event/reactor.o 00:04:58.282 CC lib/event/app_rpc.o 00:04:58.282 CC lib/event/log_rpc.o 00:04:58.282 CC lib/event/scheduler_static.o 00:04:58.282 CC lib/bdev/bdev_rpc.o 00:04:58.282 CC lib/bdev/bdev.o 00:04:58.282 SO libspdk_fsdev.so.1.0 00:04:58.282 SYMLINK libspdk_fsdev.so 00:04:58.282 CC lib/bdev/bdev_zone.o 00:04:58.282 CC lib/bdev/part.o 00:04:58.282 SO libspdk_nvme.so.14.0 00:04:58.282 CC lib/bdev/scsi_nvme.o 00:04:58.540 SYMLINK libspdk_nvme.so 00:04:58.540 LIB libspdk_event.a 00:04:58.540 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:58.798 SO libspdk_event.so.14.0 00:04:58.798 SYMLINK libspdk_event.so 00:04:59.365 LIB libspdk_fuse_dispatcher.a 00:04:59.365 SO libspdk_fuse_dispatcher.so.1.0 00:04:59.365 SYMLINK libspdk_fuse_dispatcher.so 00:04:59.624 LIB libspdk_blob.a 00:04:59.624 SO libspdk_blob.so.11.0 00:04:59.883 SYMLINK libspdk_blob.so 00:05:00.142 CC lib/blobfs/blobfs.o 00:05:00.142 CC lib/blobfs/tree.o 00:05:00.142 CC lib/lvol/lvol.o 00:05:01.077 LIB libspdk_bdev.a 00:05:01.077 LIB libspdk_blobfs.a 00:05:01.077 SO libspdk_bdev.so.16.0 00:05:01.077 SO libspdk_blobfs.so.10.0 00:05:01.077 LIB libspdk_lvol.a 00:05:01.077 SO libspdk_lvol.so.10.0 00:05:01.077 SYMLINK libspdk_bdev.so 00:05:01.077 SYMLINK libspdk_blobfs.so 00:05:01.077 SYMLINK libspdk_lvol.so 00:05:01.336 CC lib/nbd/nbd.o 00:05:01.336 CC lib/nbd/nbd_rpc.o 00:05:01.336 CC lib/ublk/ublk.o 00:05:01.336 CC lib/ublk/ublk_rpc.o 00:05:01.336 CC lib/scsi/dev.o 00:05:01.336 CC lib/scsi/lun.o 00:05:01.336 CC lib/scsi/port.o 00:05:01.336 CC lib/scsi/scsi.o 00:05:01.336 CC lib/ftl/ftl_core.o 00:05:01.336 CC lib/nvmf/ctrlr.o 00:05:01.336 CC lib/scsi/scsi_bdev.o 00:05:01.336 CC lib/nvmf/ctrlr_discovery.o 00:05:01.336 CC lib/nvmf/ctrlr_bdev.o 00:05:01.336 CC lib/scsi/scsi_pr.o 00:05:01.595 CC lib/scsi/scsi_rpc.o 00:05:01.595 CC lib/scsi/task.o 00:05:01.595 CC lib/ftl/ftl_init.o 00:05:01.595 LIB libspdk_nbd.a 00:05:01.595 SO libspdk_nbd.so.7.0 00:05:01.854 CC lib/ftl/ftl_layout.o 00:05:01.854 SYMLINK libspdk_nbd.so 00:05:01.854 CC lib/ftl/ftl_debug.o 00:05:01.854 CC lib/ftl/ftl_io.o 00:05:01.854 CC lib/ftl/ftl_sb.o 00:05:01.854 CC lib/ftl/ftl_l2p.o 00:05:01.854 LIB libspdk_scsi.a 00:05:01.854 CC lib/nvmf/subsystem.o 00:05:01.854 LIB libspdk_ublk.a 00:05:02.112 SO libspdk_scsi.so.9.0 00:05:02.112 SO libspdk_ublk.so.3.0 00:05:02.112 CC lib/ftl/ftl_l2p_flat.o 00:05:02.112 CC lib/nvmf/nvmf.o 00:05:02.112 SYMLINK libspdk_ublk.so 00:05:02.112 CC lib/nvmf/nvmf_rpc.o 00:05:02.112 CC lib/ftl/ftl_nv_cache.o 00:05:02.112 SYMLINK libspdk_scsi.so 00:05:02.112 CC lib/ftl/ftl_band.o 00:05:02.112 CC lib/nvmf/transport.o 00:05:02.112 CC lib/nvmf/tcp.o 00:05:02.112 CC lib/nvmf/stubs.o 00:05:02.112 CC lib/ftl/ftl_band_ops.o 00:05:02.679 CC lib/ftl/ftl_writer.o 00:05:02.679 CC lib/ftl/ftl_rq.o 00:05:02.679 CC lib/nvmf/mdns_server.o 00:05:02.679 CC lib/nvmf/vfio_user.o 00:05:02.679 CC 
lib/nvmf/rdma.o 00:05:02.938 CC lib/nvmf/auth.o 00:05:03.197 CC lib/ftl/ftl_reloc.o 00:05:03.197 CC lib/ftl/ftl_l2p_cache.o 00:05:03.197 CC lib/ftl/ftl_p2l.o 00:05:03.197 CC lib/iscsi/conn.o 00:05:03.197 CC lib/vhost/vhost.o 00:05:03.197 CC lib/vhost/vhost_rpc.o 00:05:03.456 CC lib/vhost/vhost_scsi.o 00:05:03.456 CC lib/vhost/vhost_blk.o 00:05:03.714 CC lib/ftl/ftl_p2l_log.o 00:05:03.714 CC lib/ftl/mngt/ftl_mngt.o 00:05:03.714 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:03.714 CC lib/iscsi/init_grp.o 00:05:03.973 CC lib/vhost/rte_vhost_user.o 00:05:03.973 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:03.973 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:03.973 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:03.973 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:04.231 CC lib/iscsi/iscsi.o 00:05:04.231 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:04.231 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:04.489 CC lib/iscsi/param.o 00:05:04.489 CC lib/iscsi/portal_grp.o 00:05:04.489 CC lib/iscsi/tgt_node.o 00:05:04.489 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:04.489 CC lib/iscsi/iscsi_subsystem.o 00:05:04.489 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:04.489 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:04.747 CC lib/iscsi/iscsi_rpc.o 00:05:04.747 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:04.747 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:04.747 CC lib/ftl/utils/ftl_conf.o 00:05:04.747 CC lib/ftl/utils/ftl_md.o 00:05:05.005 CC lib/iscsi/task.o 00:05:05.005 LIB libspdk_nvmf.a 00:05:05.005 CC lib/ftl/utils/ftl_mempool.o 00:05:05.005 CC lib/ftl/utils/ftl_bitmap.o 00:05:05.005 CC lib/ftl/utils/ftl_property.o 00:05:05.005 LIB libspdk_vhost.a 00:05:05.264 SO libspdk_nvmf.so.19.0 00:05:05.264 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:05.264 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:05.264 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:05.264 SO libspdk_vhost.so.8.0 00:05:05.264 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:05.264 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:05.264 SYMLINK libspdk_vhost.so 00:05:05.264 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:05.264 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:05.264 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:05.264 SYMLINK libspdk_nvmf.so 00:05:05.264 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:05.522 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:05.522 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:05.522 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:05.522 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:05.522 CC lib/ftl/base/ftl_base_dev.o 00:05:05.522 CC lib/ftl/base/ftl_base_bdev.o 00:05:05.522 LIB libspdk_iscsi.a 00:05:05.522 CC lib/ftl/ftl_trace.o 00:05:05.522 SO libspdk_iscsi.so.8.0 00:05:05.781 SYMLINK libspdk_iscsi.so 00:05:05.781 LIB libspdk_ftl.a 00:05:06.040 SO libspdk_ftl.so.9.0 00:05:06.298 SYMLINK libspdk_ftl.so 00:05:06.866 CC module/env_dpdk/env_dpdk_rpc.o 00:05:06.866 CC module/vfu_device/vfu_virtio.o 00:05:06.866 CC module/fsdev/aio/fsdev_aio.o 00:05:06.866 CC module/keyring/file/keyring.o 00:05:06.866 CC module/accel/error/accel_error.o 00:05:06.866 CC module/keyring/linux/keyring.o 00:05:06.866 CC module/sock/posix/posix.o 00:05:06.866 CC module/sock/uring/uring.o 00:05:06.866 CC module/blob/bdev/blob_bdev.o 00:05:06.866 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:06.866 LIB libspdk_env_dpdk_rpc.a 00:05:06.866 SO libspdk_env_dpdk_rpc.so.6.0 00:05:06.866 SYMLINK libspdk_env_dpdk_rpc.so 00:05:06.866 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:06.866 CC module/keyring/file/keyring_rpc.o 00:05:06.866 CC module/keyring/linux/keyring_rpc.o 00:05:07.133 CC module/accel/error/accel_error_rpc.o 00:05:07.133 LIB 
libspdk_scheduler_dynamic.a 00:05:07.133 SO libspdk_scheduler_dynamic.so.4.0 00:05:07.133 LIB libspdk_keyring_file.a 00:05:07.133 LIB libspdk_keyring_linux.a 00:05:07.133 LIB libspdk_blob_bdev.a 00:05:07.133 CC module/fsdev/aio/linux_aio_mgr.o 00:05:07.133 SYMLINK libspdk_scheduler_dynamic.so 00:05:07.133 SO libspdk_keyring_linux.so.1.0 00:05:07.133 SO libspdk_keyring_file.so.2.0 00:05:07.133 SO libspdk_blob_bdev.so.11.0 00:05:07.133 LIB libspdk_accel_error.a 00:05:07.133 SYMLINK libspdk_keyring_linux.so 00:05:07.133 SYMLINK libspdk_keyring_file.so 00:05:07.133 SYMLINK libspdk_blob_bdev.so 00:05:07.404 SO libspdk_accel_error.so.2.0 00:05:07.404 SYMLINK libspdk_accel_error.so 00:05:07.404 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:07.404 CC module/vfu_device/vfu_virtio_blk.o 00:05:07.404 CC module/vfu_device/vfu_virtio_scsi.o 00:05:07.404 CC module/accel/ioat/accel_ioat.o 00:05:07.404 CC module/accel/dsa/accel_dsa.o 00:05:07.404 CC module/scheduler/gscheduler/gscheduler.o 00:05:07.676 LIB libspdk_scheduler_dpdk_governor.a 00:05:07.676 LIB libspdk_fsdev_aio.a 00:05:07.676 CC module/accel/iaa/accel_iaa.o 00:05:07.676 LIB libspdk_sock_uring.a 00:05:07.676 SO libspdk_fsdev_aio.so.1.0 00:05:07.676 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:07.676 LIB libspdk_sock_posix.a 00:05:07.676 LIB libspdk_scheduler_gscheduler.a 00:05:07.676 SO libspdk_sock_uring.so.5.0 00:05:07.676 SO libspdk_scheduler_gscheduler.so.4.0 00:05:07.676 SO libspdk_sock_posix.so.6.0 00:05:07.676 CC module/accel/ioat/accel_ioat_rpc.o 00:05:07.676 SYMLINK libspdk_fsdev_aio.so 00:05:07.676 SYMLINK libspdk_sock_uring.so 00:05:07.676 CC module/accel/iaa/accel_iaa_rpc.o 00:05:07.676 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:07.676 CC module/vfu_device/vfu_virtio_rpc.o 00:05:07.676 CC module/accel/dsa/accel_dsa_rpc.o 00:05:07.676 CC module/vfu_device/vfu_virtio_fs.o 00:05:07.676 SYMLINK libspdk_scheduler_gscheduler.so 00:05:07.676 SYMLINK libspdk_sock_posix.so 00:05:07.934 LIB libspdk_accel_ioat.a 00:05:07.934 LIB libspdk_accel_dsa.a 00:05:07.934 LIB libspdk_accel_iaa.a 00:05:07.934 SO libspdk_accel_ioat.so.6.0 00:05:07.934 SO libspdk_accel_iaa.so.3.0 00:05:07.934 SO libspdk_accel_dsa.so.5.0 00:05:07.934 SYMLINK libspdk_accel_ioat.so 00:05:07.934 SYMLINK libspdk_accel_dsa.so 00:05:07.934 SYMLINK libspdk_accel_iaa.so 00:05:07.934 LIB libspdk_vfu_device.a 00:05:07.934 CC module/bdev/delay/vbdev_delay.o 00:05:07.934 CC module/bdev/lvol/vbdev_lvol.o 00:05:07.934 CC module/bdev/gpt/gpt.o 00:05:07.934 CC module/bdev/error/vbdev_error.o 00:05:08.192 SO libspdk_vfu_device.so.3.0 00:05:08.192 CC module/blobfs/bdev/blobfs_bdev.o 00:05:08.192 CC module/bdev/malloc/bdev_malloc.o 00:05:08.192 CC module/bdev/null/bdev_null.o 00:05:08.192 CC module/bdev/passthru/vbdev_passthru.o 00:05:08.192 CC module/bdev/nvme/bdev_nvme.o 00:05:08.192 SYMLINK libspdk_vfu_device.so 00:05:08.192 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:08.192 CC module/bdev/gpt/vbdev_gpt.o 00:05:08.192 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:08.450 CC module/bdev/error/vbdev_error_rpc.o 00:05:08.450 CC module/bdev/null/bdev_null_rpc.o 00:05:08.450 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:08.450 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:08.450 LIB libspdk_blobfs_bdev.a 00:05:08.450 LIB libspdk_bdev_passthru.a 00:05:08.450 SO libspdk_blobfs_bdev.so.6.0 00:05:08.450 SO libspdk_bdev_passthru.so.6.0 00:05:08.450 LIB libspdk_bdev_error.a 00:05:08.450 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:08.450 LIB libspdk_bdev_null.a 00:05:08.450 SO 
libspdk_bdev_error.so.6.0 00:05:08.450 LIB libspdk_bdev_gpt.a 00:05:08.451 SYMLINK libspdk_bdev_passthru.so 00:05:08.709 SYMLINK libspdk_blobfs_bdev.so 00:05:08.709 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:08.709 CC module/bdev/nvme/nvme_rpc.o 00:05:08.709 SO libspdk_bdev_null.so.6.0 00:05:08.709 SO libspdk_bdev_gpt.so.6.0 00:05:08.709 SYMLINK libspdk_bdev_error.so 00:05:08.709 LIB libspdk_bdev_delay.a 00:05:08.709 CC module/bdev/nvme/bdev_mdns_client.o 00:05:08.709 CC module/bdev/nvme/vbdev_opal.o 00:05:08.709 LIB libspdk_bdev_malloc.a 00:05:08.709 SYMLINK libspdk_bdev_null.so 00:05:08.709 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:08.709 SO libspdk_bdev_delay.so.6.0 00:05:08.709 SYMLINK libspdk_bdev_gpt.so 00:05:08.709 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:08.709 SO libspdk_bdev_malloc.so.6.0 00:05:08.709 SYMLINK libspdk_bdev_delay.so 00:05:08.709 SYMLINK libspdk_bdev_malloc.so 00:05:08.967 CC module/bdev/split/vbdev_split.o 00:05:08.967 CC module/bdev/raid/bdev_raid.o 00:05:08.967 LIB libspdk_bdev_lvol.a 00:05:08.967 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:08.967 SO libspdk_bdev_lvol.so.6.0 00:05:08.967 CC module/bdev/uring/bdev_uring.o 00:05:09.226 CC module/bdev/aio/bdev_aio.o 00:05:09.226 SYMLINK libspdk_bdev_lvol.so 00:05:09.226 CC module/bdev/ftl/bdev_ftl.o 00:05:09.226 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:09.226 CC module/bdev/iscsi/bdev_iscsi.o 00:05:09.226 CC module/bdev/split/vbdev_split_rpc.o 00:05:09.226 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:09.485 LIB libspdk_bdev_split.a 00:05:09.485 CC module/bdev/raid/bdev_raid_rpc.o 00:05:09.485 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:09.485 CC module/bdev/raid/bdev_raid_sb.o 00:05:09.485 SO libspdk_bdev_split.so.6.0 00:05:09.485 CC module/bdev/aio/bdev_aio_rpc.o 00:05:09.485 SYMLINK libspdk_bdev_split.so 00:05:09.485 CC module/bdev/raid/raid0.o 00:05:09.485 LIB libspdk_bdev_ftl.a 00:05:09.485 CC module/bdev/uring/bdev_uring_rpc.o 00:05:09.485 LIB libspdk_bdev_iscsi.a 00:05:09.485 SO libspdk_bdev_ftl.so.6.0 00:05:09.743 LIB libspdk_bdev_zone_block.a 00:05:09.743 SO libspdk_bdev_iscsi.so.6.0 00:05:09.743 SO libspdk_bdev_zone_block.so.6.0 00:05:09.743 SYMLINK libspdk_bdev_ftl.so 00:05:09.743 CC module/bdev/raid/raid1.o 00:05:09.743 CC module/bdev/raid/concat.o 00:05:09.743 LIB libspdk_bdev_aio.a 00:05:09.743 SYMLINK libspdk_bdev_iscsi.so 00:05:09.743 SO libspdk_bdev_aio.so.6.0 00:05:09.743 SYMLINK libspdk_bdev_zone_block.so 00:05:09.743 LIB libspdk_bdev_uring.a 00:05:09.743 SO libspdk_bdev_uring.so.6.0 00:05:09.743 SYMLINK libspdk_bdev_aio.so 00:05:09.743 SYMLINK libspdk_bdev_uring.so 00:05:10.002 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:10.002 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:10.002 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:10.261 LIB libspdk_bdev_raid.a 00:05:10.261 SO libspdk_bdev_raid.so.6.0 00:05:10.261 SYMLINK libspdk_bdev_raid.so 00:05:10.520 LIB libspdk_bdev_virtio.a 00:05:10.520 SO libspdk_bdev_virtio.so.6.0 00:05:10.520 SYMLINK libspdk_bdev_virtio.so 00:05:10.779 LIB libspdk_bdev_nvme.a 00:05:10.779 SO libspdk_bdev_nvme.so.7.0 00:05:10.779 SYMLINK libspdk_bdev_nvme.so 00:05:11.348 CC module/event/subsystems/iobuf/iobuf.o 00:05:11.348 CC module/event/subsystems/keyring/keyring.o 00:05:11.348 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:11.348 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:11.348 CC module/event/subsystems/vmd/vmd.o 00:05:11.348 CC module/event/subsystems/fsdev/fsdev.o 00:05:11.348 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:11.348 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:05:11.348 CC module/event/subsystems/sock/sock.o 00:05:11.348 CC module/event/subsystems/scheduler/scheduler.o 00:05:11.607 LIB libspdk_event_vmd.a 00:05:11.607 SO libspdk_event_vmd.so.6.0 00:05:11.607 LIB libspdk_event_keyring.a 00:05:11.607 LIB libspdk_event_fsdev.a 00:05:11.607 LIB libspdk_event_vhost_blk.a 00:05:11.607 LIB libspdk_event_vfu_tgt.a 00:05:11.607 SO libspdk_event_keyring.so.1.0 00:05:11.607 SYMLINK libspdk_event_vmd.so 00:05:11.607 SO libspdk_event_fsdev.so.1.0 00:05:11.607 LIB libspdk_event_sock.a 00:05:11.607 SO libspdk_event_vhost_blk.so.3.0 00:05:11.607 LIB libspdk_event_iobuf.a 00:05:11.607 SO libspdk_event_vfu_tgt.so.3.0 00:05:11.607 LIB libspdk_event_scheduler.a 00:05:11.607 SO libspdk_event_sock.so.5.0 00:05:11.607 SYMLINK libspdk_event_keyring.so 00:05:11.607 SYMLINK libspdk_event_vhost_blk.so 00:05:11.607 SO libspdk_event_iobuf.so.3.0 00:05:11.607 SO libspdk_event_scheduler.so.4.0 00:05:11.607 SYMLINK libspdk_event_vfu_tgt.so 00:05:11.607 SYMLINK libspdk_event_fsdev.so 00:05:11.607 SYMLINK libspdk_event_sock.so 00:05:11.607 SYMLINK libspdk_event_scheduler.so 00:05:11.607 SYMLINK libspdk_event_iobuf.so 00:05:11.865 CC module/event/subsystems/accel/accel.o 00:05:12.124 LIB libspdk_event_accel.a 00:05:12.124 SO libspdk_event_accel.so.6.0 00:05:12.124 SYMLINK libspdk_event_accel.so 00:05:12.382 CC module/event/subsystems/bdev/bdev.o 00:05:12.640 LIB libspdk_event_bdev.a 00:05:12.640 SO libspdk_event_bdev.so.6.0 00:05:12.899 SYMLINK libspdk_event_bdev.so 00:05:12.899 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:12.899 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:12.899 CC module/event/subsystems/nbd/nbd.o 00:05:12.899 CC module/event/subsystems/scsi/scsi.o 00:05:12.899 CC module/event/subsystems/ublk/ublk.o 00:05:13.157 LIB libspdk_event_nbd.a 00:05:13.157 LIB libspdk_event_ublk.a 00:05:13.157 SO libspdk_event_nbd.so.6.0 00:05:13.157 SO libspdk_event_ublk.so.3.0 00:05:13.157 LIB libspdk_event_scsi.a 00:05:13.157 SO libspdk_event_scsi.so.6.0 00:05:13.157 LIB libspdk_event_nvmf.a 00:05:13.157 SYMLINK libspdk_event_nbd.so 00:05:13.157 SYMLINK libspdk_event_ublk.so 00:05:13.157 SO libspdk_event_nvmf.so.6.0 00:05:13.157 SYMLINK libspdk_event_scsi.so 00:05:13.415 SYMLINK libspdk_event_nvmf.so 00:05:13.415 CC module/event/subsystems/iscsi/iscsi.o 00:05:13.415 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:13.674 LIB libspdk_event_vhost_scsi.a 00:05:13.674 SO libspdk_event_vhost_scsi.so.3.0 00:05:13.674 LIB libspdk_event_iscsi.a 00:05:13.674 SO libspdk_event_iscsi.so.6.0 00:05:13.674 SYMLINK libspdk_event_vhost_scsi.so 00:05:13.934 SYMLINK libspdk_event_iscsi.so 00:05:13.934 SO libspdk.so.6.0 00:05:13.934 SYMLINK libspdk.so 00:05:14.191 CC app/trace_record/trace_record.o 00:05:14.191 TEST_HEADER include/spdk/accel.h 00:05:14.191 TEST_HEADER include/spdk/accel_module.h 00:05:14.191 TEST_HEADER include/spdk/assert.h 00:05:14.191 TEST_HEADER include/spdk/barrier.h 00:05:14.191 CXX app/trace/trace.o 00:05:14.191 TEST_HEADER include/spdk/base64.h 00:05:14.191 TEST_HEADER include/spdk/bdev.h 00:05:14.191 CC app/nvmf_tgt/nvmf_main.o 00:05:14.191 TEST_HEADER include/spdk/bdev_module.h 00:05:14.191 TEST_HEADER include/spdk/bdev_zone.h 00:05:14.191 TEST_HEADER include/spdk/bit_array.h 00:05:14.191 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:14.191 TEST_HEADER include/spdk/bit_pool.h 00:05:14.191 TEST_HEADER include/spdk/blob_bdev.h 00:05:14.191 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:14.191 TEST_HEADER 
include/spdk/blobfs.h 00:05:14.191 TEST_HEADER include/spdk/blob.h 00:05:14.191 TEST_HEADER include/spdk/conf.h 00:05:14.191 TEST_HEADER include/spdk/config.h 00:05:14.191 TEST_HEADER include/spdk/cpuset.h 00:05:14.191 TEST_HEADER include/spdk/crc16.h 00:05:14.191 TEST_HEADER include/spdk/crc32.h 00:05:14.191 TEST_HEADER include/spdk/crc64.h 00:05:14.191 TEST_HEADER include/spdk/dif.h 00:05:14.191 TEST_HEADER include/spdk/dma.h 00:05:14.191 TEST_HEADER include/spdk/endian.h 00:05:14.191 TEST_HEADER include/spdk/env_dpdk.h 00:05:14.191 TEST_HEADER include/spdk/env.h 00:05:14.191 TEST_HEADER include/spdk/event.h 00:05:14.191 TEST_HEADER include/spdk/fd_group.h 00:05:14.191 TEST_HEADER include/spdk/fd.h 00:05:14.191 TEST_HEADER include/spdk/file.h 00:05:14.450 TEST_HEADER include/spdk/fsdev.h 00:05:14.450 TEST_HEADER include/spdk/fsdev_module.h 00:05:14.450 TEST_HEADER include/spdk/ftl.h 00:05:14.450 CC examples/ioat/perf/perf.o 00:05:14.450 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:14.450 TEST_HEADER include/spdk/gpt_spec.h 00:05:14.450 TEST_HEADER include/spdk/hexlify.h 00:05:14.450 CC test/thread/poller_perf/poller_perf.o 00:05:14.450 TEST_HEADER include/spdk/histogram_data.h 00:05:14.450 TEST_HEADER include/spdk/idxd.h 00:05:14.450 TEST_HEADER include/spdk/idxd_spec.h 00:05:14.450 TEST_HEADER include/spdk/init.h 00:05:14.450 TEST_HEADER include/spdk/ioat.h 00:05:14.450 CC examples/util/zipf/zipf.o 00:05:14.450 TEST_HEADER include/spdk/ioat_spec.h 00:05:14.450 TEST_HEADER include/spdk/iscsi_spec.h 00:05:14.450 TEST_HEADER include/spdk/json.h 00:05:14.450 TEST_HEADER include/spdk/jsonrpc.h 00:05:14.450 TEST_HEADER include/spdk/keyring.h 00:05:14.450 TEST_HEADER include/spdk/keyring_module.h 00:05:14.450 CC test/dma/test_dma/test_dma.o 00:05:14.450 TEST_HEADER include/spdk/likely.h 00:05:14.450 TEST_HEADER include/spdk/log.h 00:05:14.450 TEST_HEADER include/spdk/lvol.h 00:05:14.450 TEST_HEADER include/spdk/md5.h 00:05:14.450 TEST_HEADER include/spdk/memory.h 00:05:14.450 TEST_HEADER include/spdk/mmio.h 00:05:14.450 TEST_HEADER include/spdk/nbd.h 00:05:14.450 TEST_HEADER include/spdk/net.h 00:05:14.450 CC test/app/bdev_svc/bdev_svc.o 00:05:14.450 TEST_HEADER include/spdk/notify.h 00:05:14.450 TEST_HEADER include/spdk/nvme.h 00:05:14.450 TEST_HEADER include/spdk/nvme_intel.h 00:05:14.450 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:14.450 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:14.450 TEST_HEADER include/spdk/nvme_spec.h 00:05:14.450 TEST_HEADER include/spdk/nvme_zns.h 00:05:14.450 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:14.450 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:14.450 TEST_HEADER include/spdk/nvmf.h 00:05:14.450 TEST_HEADER include/spdk/nvmf_spec.h 00:05:14.450 TEST_HEADER include/spdk/nvmf_transport.h 00:05:14.450 TEST_HEADER include/spdk/opal.h 00:05:14.450 TEST_HEADER include/spdk/opal_spec.h 00:05:14.450 TEST_HEADER include/spdk/pci_ids.h 00:05:14.450 TEST_HEADER include/spdk/pipe.h 00:05:14.450 TEST_HEADER include/spdk/queue.h 00:05:14.450 TEST_HEADER include/spdk/reduce.h 00:05:14.450 TEST_HEADER include/spdk/rpc.h 00:05:14.450 TEST_HEADER include/spdk/scheduler.h 00:05:14.450 TEST_HEADER include/spdk/scsi.h 00:05:14.450 TEST_HEADER include/spdk/scsi_spec.h 00:05:14.450 TEST_HEADER include/spdk/sock.h 00:05:14.450 LINK interrupt_tgt 00:05:14.450 TEST_HEADER include/spdk/stdinc.h 00:05:14.450 TEST_HEADER include/spdk/string.h 00:05:14.450 TEST_HEADER include/spdk/thread.h 00:05:14.450 TEST_HEADER include/spdk/trace.h 00:05:14.450 TEST_HEADER 
include/spdk/trace_parser.h 00:05:14.450 TEST_HEADER include/spdk/tree.h 00:05:14.450 TEST_HEADER include/spdk/ublk.h 00:05:14.450 TEST_HEADER include/spdk/util.h 00:05:14.450 TEST_HEADER include/spdk/uuid.h 00:05:14.450 TEST_HEADER include/spdk/version.h 00:05:14.450 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:14.450 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:14.450 TEST_HEADER include/spdk/vhost.h 00:05:14.450 TEST_HEADER include/spdk/vmd.h 00:05:14.450 TEST_HEADER include/spdk/xor.h 00:05:14.450 TEST_HEADER include/spdk/zipf.h 00:05:14.450 LINK zipf 00:05:14.450 CXX test/cpp_headers/accel.o 00:05:14.450 LINK spdk_trace_record 00:05:14.709 LINK nvmf_tgt 00:05:14.709 LINK poller_perf 00:05:14.709 LINK ioat_perf 00:05:14.709 LINK bdev_svc 00:05:14.709 CXX test/cpp_headers/accel_module.o 00:05:14.709 LINK spdk_trace 00:05:14.709 CXX test/cpp_headers/assert.o 00:05:14.709 CXX test/cpp_headers/barrier.o 00:05:14.967 CC test/rpc_client/rpc_client_test.o 00:05:14.967 CC examples/ioat/verify/verify.o 00:05:14.967 CXX test/cpp_headers/base64.o 00:05:14.967 CC test/app/histogram_perf/histogram_perf.o 00:05:14.967 CC test/env/mem_callbacks/mem_callbacks.o 00:05:14.967 LINK test_dma 00:05:14.967 LINK rpc_client_test 00:05:14.967 CC app/iscsi_tgt/iscsi_tgt.o 00:05:14.967 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:15.226 CC test/event/event_perf/event_perf.o 00:05:15.226 CC examples/thread/thread/thread_ex.o 00:05:15.226 CXX test/cpp_headers/bdev.o 00:05:15.226 LINK verify 00:05:15.226 LINK histogram_perf 00:05:15.226 LINK event_perf 00:05:15.226 CC test/app/jsoncat/jsoncat.o 00:05:15.484 LINK iscsi_tgt 00:05:15.484 CXX test/cpp_headers/bdev_module.o 00:05:15.484 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:15.484 CC test/app/stub/stub.o 00:05:15.484 LINK thread 00:05:15.484 LINK jsoncat 00:05:15.484 CC app/spdk_tgt/spdk_tgt.o 00:05:15.484 LINK nvme_fuzz 00:05:15.484 CC test/event/reactor/reactor.o 00:05:15.743 CXX test/cpp_headers/bdev_zone.o 00:05:15.743 LINK stub 00:05:15.743 CXX test/cpp_headers/bit_array.o 00:05:15.743 LINK mem_callbacks 00:05:15.743 CXX test/cpp_headers/bit_pool.o 00:05:15.743 CXX test/cpp_headers/blob_bdev.o 00:05:15.743 LINK reactor 00:05:15.743 LINK spdk_tgt 00:05:16.001 CXX test/cpp_headers/blobfs_bdev.o 00:05:16.001 CC examples/sock/hello_world/hello_sock.o 00:05:16.001 CC test/env/vtophys/vtophys.o 00:05:16.001 CC test/env/memory/memory_ut.o 00:05:16.001 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:16.001 CC test/env/pci/pci_ut.o 00:05:16.001 CXX test/cpp_headers/blobfs.o 00:05:16.001 CC test/event/reactor_perf/reactor_perf.o 00:05:16.001 LINK vtophys 00:05:16.001 LINK env_dpdk_post_init 00:05:16.001 CXX test/cpp_headers/blob.o 00:05:16.260 CC app/spdk_lspci/spdk_lspci.o 00:05:16.260 LINK hello_sock 00:05:16.260 LINK reactor_perf 00:05:16.260 CXX test/cpp_headers/conf.o 00:05:16.260 CXX test/cpp_headers/config.o 00:05:16.260 LINK spdk_lspci 00:05:16.260 CC test/accel/dif/dif.o 00:05:16.518 LINK pci_ut 00:05:16.518 CC app/spdk_nvme_perf/perf.o 00:05:16.518 CC app/spdk_nvme_identify/identify.o 00:05:16.518 CXX test/cpp_headers/cpuset.o 00:05:16.518 CC examples/vmd/lsvmd/lsvmd.o 00:05:16.518 CC test/event/app_repeat/app_repeat.o 00:05:16.518 CC examples/vmd/led/led.o 00:05:16.776 CXX test/cpp_headers/crc16.o 00:05:16.776 LINK lsvmd 00:05:16.776 LINK app_repeat 00:05:16.776 LINK led 00:05:16.776 CXX test/cpp_headers/crc32.o 00:05:17.035 CC examples/idxd/perf/perf.o 00:05:17.035 CXX test/cpp_headers/crc64.o 00:05:17.035 CC 
test/event/scheduler/scheduler.o 00:05:17.035 LINK dif 00:05:17.035 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:17.294 CC examples/accel/perf/accel_perf.o 00:05:17.294 LINK iscsi_fuzz 00:05:17.294 LINK memory_ut 00:05:17.294 CXX test/cpp_headers/dif.o 00:05:17.294 LINK idxd_perf 00:05:17.294 LINK spdk_nvme_identify 00:05:17.294 LINK scheduler 00:05:17.294 LINK spdk_nvme_perf 00:05:17.553 LINK hello_fsdev 00:05:17.553 CXX test/cpp_headers/dma.o 00:05:17.553 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:17.553 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:17.553 CC examples/blob/hello_world/hello_blob.o 00:05:17.553 CC examples/blob/cli/blobcli.o 00:05:17.553 CXX test/cpp_headers/endian.o 00:05:17.812 CC app/spdk_nvme_discover/discovery_aer.o 00:05:17.812 LINK accel_perf 00:05:17.812 CC test/blobfs/mkfs/mkfs.o 00:05:17.812 CC test/lvol/esnap/esnap.o 00:05:17.812 CXX test/cpp_headers/env_dpdk.o 00:05:17.812 CC examples/nvme/hello_world/hello_world.o 00:05:17.812 LINK hello_blob 00:05:17.812 CC test/nvme/aer/aer.o 00:05:17.812 LINK spdk_nvme_discover 00:05:17.812 CXX test/cpp_headers/env.o 00:05:18.070 LINK mkfs 00:05:18.070 LINK vhost_fuzz 00:05:18.070 CXX test/cpp_headers/event.o 00:05:18.070 CC test/nvme/reset/reset.o 00:05:18.070 LINK hello_world 00:05:18.070 LINK blobcli 00:05:18.329 CC app/spdk_top/spdk_top.o 00:05:18.329 LINK aer 00:05:18.329 CC test/nvme/sgl/sgl.o 00:05:18.329 CXX test/cpp_headers/fd_group.o 00:05:18.329 CC examples/nvme/reconnect/reconnect.o 00:05:18.329 CXX test/cpp_headers/fd.o 00:05:18.329 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:18.588 LINK reset 00:05:18.588 CC test/bdev/bdevio/bdevio.o 00:05:18.588 LINK sgl 00:05:18.588 CC test/nvme/e2edp/nvme_dp.o 00:05:18.588 CXX test/cpp_headers/file.o 00:05:18.588 CC examples/nvme/arbitration/arbitration.o 00:05:18.849 LINK reconnect 00:05:18.849 CXX test/cpp_headers/fsdev.o 00:05:18.849 CC test/nvme/overhead/overhead.o 00:05:18.849 CC app/vhost/vhost.o 00:05:18.849 LINK nvme_dp 00:05:18.849 LINK bdevio 00:05:19.115 CXX test/cpp_headers/fsdev_module.o 00:05:19.115 LINK nvme_manage 00:05:19.115 CC examples/nvme/hotplug/hotplug.o 00:05:19.115 LINK arbitration 00:05:19.115 LINK overhead 00:05:19.115 LINK vhost 00:05:19.115 CXX test/cpp_headers/ftl.o 00:05:19.115 LINK spdk_top 00:05:19.373 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:19.373 LINK hotplug 00:05:19.373 CC test/nvme/err_injection/err_injection.o 00:05:19.373 CC test/nvme/startup/startup.o 00:05:19.373 CC app/spdk_dd/spdk_dd.o 00:05:19.373 CXX test/cpp_headers/fuse_dispatcher.o 00:05:19.373 LINK cmb_copy 00:05:19.373 CC examples/nvme/abort/abort.o 00:05:19.631 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:19.631 CC app/fio/nvme/fio_plugin.o 00:05:19.631 LINK err_injection 00:05:19.631 LINK startup 00:05:19.631 CXX test/cpp_headers/gpt_spec.o 00:05:19.631 CC app/fio/bdev/fio_plugin.o 00:05:19.631 LINK pmr_persistence 00:05:19.889 CXX test/cpp_headers/hexlify.o 00:05:19.889 CC test/nvme/reserve/reserve.o 00:05:19.889 LINK spdk_dd 00:05:19.889 CC test/nvme/simple_copy/simple_copy.o 00:05:19.889 CC examples/bdev/hello_world/hello_bdev.o 00:05:19.889 LINK abort 00:05:19.889 CXX test/cpp_headers/histogram_data.o 00:05:19.889 CC test/nvme/connect_stress/connect_stress.o 00:05:20.148 LINK reserve 00:05:20.148 LINK spdk_nvme 00:05:20.148 CXX test/cpp_headers/idxd.o 00:05:20.148 LINK spdk_bdev 00:05:20.148 LINK hello_bdev 00:05:20.148 CC test/nvme/boot_partition/boot_partition.o 00:05:20.148 CC test/nvme/compliance/nvme_compliance.o 00:05:20.148 
LINK connect_stress 00:05:20.148 LINK simple_copy 00:05:20.148 CXX test/cpp_headers/idxd_spec.o 00:05:20.148 CXX test/cpp_headers/init.o 00:05:20.406 CXX test/cpp_headers/ioat.o 00:05:20.406 LINK boot_partition 00:05:20.406 CXX test/cpp_headers/ioat_spec.o 00:05:20.406 CXX test/cpp_headers/iscsi_spec.o 00:05:20.406 CC test/nvme/fused_ordering/fused_ordering.o 00:05:20.406 CXX test/cpp_headers/json.o 00:05:20.406 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:20.406 CC test/nvme/fdp/fdp.o 00:05:20.406 CC examples/bdev/bdevperf/bdevperf.o 00:05:20.406 LINK nvme_compliance 00:05:20.665 CXX test/cpp_headers/jsonrpc.o 00:05:20.665 CXX test/cpp_headers/keyring.o 00:05:20.665 CC test/nvme/cuse/cuse.o 00:05:20.665 CXX test/cpp_headers/keyring_module.o 00:05:20.665 LINK doorbell_aers 00:05:20.665 CXX test/cpp_headers/likely.o 00:05:20.665 LINK fused_ordering 00:05:20.665 CXX test/cpp_headers/log.o 00:05:20.665 CXX test/cpp_headers/lvol.o 00:05:20.923 CXX test/cpp_headers/md5.o 00:05:20.923 LINK fdp 00:05:20.923 CXX test/cpp_headers/memory.o 00:05:20.923 CXX test/cpp_headers/mmio.o 00:05:20.923 CXX test/cpp_headers/nbd.o 00:05:20.923 CXX test/cpp_headers/net.o 00:05:20.923 CXX test/cpp_headers/notify.o 00:05:20.923 CXX test/cpp_headers/nvme.o 00:05:20.923 CXX test/cpp_headers/nvme_intel.o 00:05:20.923 CXX test/cpp_headers/nvme_ocssd.o 00:05:20.923 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:21.181 CXX test/cpp_headers/nvme_spec.o 00:05:21.181 CXX test/cpp_headers/nvme_zns.o 00:05:21.181 CXX test/cpp_headers/nvmf_cmd.o 00:05:21.181 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:21.181 CXX test/cpp_headers/nvmf.o 00:05:21.181 CXX test/cpp_headers/nvmf_spec.o 00:05:21.181 CXX test/cpp_headers/nvmf_transport.o 00:05:21.181 CXX test/cpp_headers/opal.o 00:05:21.181 CXX test/cpp_headers/opal_spec.o 00:05:21.440 LINK bdevperf 00:05:21.440 CXX test/cpp_headers/pci_ids.o 00:05:21.440 CXX test/cpp_headers/pipe.o 00:05:21.440 CXX test/cpp_headers/queue.o 00:05:21.440 CXX test/cpp_headers/reduce.o 00:05:21.440 CXX test/cpp_headers/rpc.o 00:05:21.440 CXX test/cpp_headers/scheduler.o 00:05:21.440 CXX test/cpp_headers/scsi.o 00:05:21.440 CXX test/cpp_headers/scsi_spec.o 00:05:21.440 CXX test/cpp_headers/sock.o 00:05:21.440 CXX test/cpp_headers/stdinc.o 00:05:21.698 CXX test/cpp_headers/string.o 00:05:21.698 CXX test/cpp_headers/thread.o 00:05:21.698 CXX test/cpp_headers/trace.o 00:05:21.698 CXX test/cpp_headers/trace_parser.o 00:05:21.698 CXX test/cpp_headers/tree.o 00:05:21.698 CXX test/cpp_headers/ublk.o 00:05:21.698 CXX test/cpp_headers/util.o 00:05:21.698 CXX test/cpp_headers/uuid.o 00:05:21.698 CC examples/nvmf/nvmf/nvmf.o 00:05:21.698 CXX test/cpp_headers/version.o 00:05:21.698 CXX test/cpp_headers/vfio_user_pci.o 00:05:21.698 CXX test/cpp_headers/vfio_user_spec.o 00:05:21.698 CXX test/cpp_headers/vhost.o 00:05:21.698 CXX test/cpp_headers/vmd.o 00:05:21.957 CXX test/cpp_headers/xor.o 00:05:21.957 CXX test/cpp_headers/zipf.o 00:05:21.957 LINK cuse 00:05:21.957 LINK nvmf 00:05:22.894 LINK esnap 00:05:23.154 00:05:23.154 real 1m25.998s 00:05:23.154 user 7m9.005s 00:05:23.154 sys 1m11.391s 00:05:23.154 12:25:28 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:23.154 ************************************ 00:05:23.154 END TEST make 00:05:23.154 ************************************ 00:05:23.154 12:25:28 make -- common/autotest_common.sh@10 -- $ set +x 00:05:23.414 12:25:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:23.414 12:25:28 -- pm/common@29 -- $ signal_monitor_resources TERM 
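The closing block above wraps the whole build step in a banner-and-`time` helper: xtrace is switched off, the real/user/sys totals are printed, and the run is bracketed by asterisk banners ("END TEST make" here, with matching START TEST banners for the env tests further down). A minimal bash sketch of that pattern, with hypothetical names rather than the exact autotest_common.sh helpers:

# Hypothetical wrapper illustrating the banner-plus-`time` pattern behind the
# "END TEST make" summary above; not the verbatim SPDK run_test implementation.
run_timed_step() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                    # emits the real/user/sys totals seen above
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}

run_timed_step make make -j"$(nproc)"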
00:05:23.414 12:25:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:23.414 12:25:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:23.414 12:25:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:23.414 12:25:28 -- pm/common@44 -- $ pid=6036 00:05:23.414 12:25:28 -- pm/common@50 -- $ kill -TERM 6036 00:05:23.414 12:25:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:23.414 12:25:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:23.414 12:25:28 -- pm/common@44 -- $ pid=6038 00:05:23.414 12:25:28 -- pm/common@50 -- $ kill -TERM 6038 00:05:23.414 12:25:28 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:23.414 12:25:28 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:23.414 12:25:28 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:23.414 12:25:28 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:23.414 12:25:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.414 12:25:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.414 12:25:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.414 12:25:28 -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.414 12:25:28 -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.414 12:25:28 -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.414 12:25:28 -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.414 12:25:28 -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.414 12:25:28 -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.414 12:25:28 -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.414 12:25:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.414 12:25:28 -- scripts/common.sh@344 -- # case "$op" in 00:05:23.414 12:25:28 -- scripts/common.sh@345 -- # : 1 00:05:23.414 12:25:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.414 12:25:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.414 12:25:28 -- scripts/common.sh@365 -- # decimal 1 00:05:23.414 12:25:28 -- scripts/common.sh@353 -- # local d=1 00:05:23.414 12:25:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.414 12:25:28 -- scripts/common.sh@355 -- # echo 1 00:05:23.414 12:25:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.414 12:25:28 -- scripts/common.sh@366 -- # decimal 2 00:05:23.414 12:25:28 -- scripts/common.sh@353 -- # local d=2 00:05:23.414 12:25:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.414 12:25:28 -- scripts/common.sh@355 -- # echo 2 00:05:23.414 12:25:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.414 12:25:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.415 12:25:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.415 12:25:28 -- scripts/common.sh@368 -- # return 0 00:05:23.415 12:25:28 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.415 12:25:28 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:23.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.415 --rc genhtml_branch_coverage=1 00:05:23.415 --rc genhtml_function_coverage=1 00:05:23.415 --rc genhtml_legend=1 00:05:23.415 --rc geninfo_all_blocks=1 00:05:23.415 --rc geninfo_unexecuted_blocks=1 00:05:23.415 00:05:23.415 ' 00:05:23.415 12:25:28 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:23.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.415 --rc genhtml_branch_coverage=1 00:05:23.415 --rc genhtml_function_coverage=1 00:05:23.415 --rc genhtml_legend=1 00:05:23.415 --rc geninfo_all_blocks=1 00:05:23.415 --rc geninfo_unexecuted_blocks=1 00:05:23.415 00:05:23.415 ' 00:05:23.415 12:25:28 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:23.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.415 --rc genhtml_branch_coverage=1 00:05:23.415 --rc genhtml_function_coverage=1 00:05:23.415 --rc genhtml_legend=1 00:05:23.415 --rc geninfo_all_blocks=1 00:05:23.415 --rc geninfo_unexecuted_blocks=1 00:05:23.415 00:05:23.415 ' 00:05:23.415 12:25:28 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:23.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.415 --rc genhtml_branch_coverage=1 00:05:23.415 --rc genhtml_function_coverage=1 00:05:23.415 --rc genhtml_legend=1 00:05:23.415 --rc geninfo_all_blocks=1 00:05:23.415 --rc geninfo_unexecuted_blocks=1 00:05:23.415 00:05:23.415 ' 00:05:23.415 12:25:28 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:23.415 12:25:28 -- nvmf/common.sh@7 -- # uname -s 00:05:23.415 12:25:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.415 12:25:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.415 12:25:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.415 12:25:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.415 12:25:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.415 12:25:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.415 12:25:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.415 12:25:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.415 12:25:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.415 12:25:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.415 12:25:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:05:23.415 
12:25:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:05:23.415 12:25:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.415 12:25:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.415 12:25:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:23.415 12:25:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.415 12:25:28 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:23.415 12:25:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:23.415 12:25:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.415 12:25:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.415 12:25:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.415 12:25:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.415 12:25:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.415 12:25:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.415 12:25:28 -- paths/export.sh@5 -- # export PATH 00:05:23.415 12:25:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.415 12:25:28 -- nvmf/common.sh@51 -- # : 0 00:05:23.415 12:25:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:23.415 12:25:28 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:23.415 12:25:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.415 12:25:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.415 12:25:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.415 12:25:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:23.415 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:23.415 12:25:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:23.415 12:25:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:23.415 12:25:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:23.415 12:25:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:23.415 12:25:28 -- spdk/autotest.sh@32 -- # uname -s 00:05:23.415 12:25:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:23.415 12:25:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:23.415 12:25:28 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:23.674 12:25:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:23.675 12:25:28 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:23.675 12:25:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:23.675 12:25:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:23.675 12:25:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:23.675 12:25:28 -- spdk/autotest.sh@48 -- # udevadm_pid=67558 00:05:23.675 12:25:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:23.675 12:25:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:23.675 12:25:28 -- pm/common@17 -- # local monitor 00:05:23.675 12:25:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:23.675 12:25:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:23.675 12:25:28 -- pm/common@25 -- # sleep 1 00:05:23.675 12:25:28 -- pm/common@21 -- # date +%s 00:05:23.675 12:25:28 -- pm/common@21 -- # date +%s 00:05:23.675 12:25:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732019128 00:05:23.675 12:25:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732019128 00:05:23.675 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732019128_collect-cpu-load.pm.log 00:05:23.675 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732019128_collect-vmstat.pm.log 00:05:24.611 12:25:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:24.611 12:25:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:24.611 12:25:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:24.611 12:25:29 -- common/autotest_common.sh@10 -- # set +x 00:05:24.611 12:25:29 -- spdk/autotest.sh@59 -- # create_test_list 00:05:24.611 12:25:29 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:24.611 12:25:29 -- common/autotest_common.sh@10 -- # set +x 00:05:24.611 12:25:29 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:24.611 12:25:29 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:24.611 12:25:29 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:24.611 12:25:29 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:24.611 12:25:29 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:24.611 12:25:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:24.611 12:25:29 -- common/autotest_common.sh@1455 -- # uname 00:05:24.611 12:25:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:24.611 12:25:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:24.611 12:25:29 -- common/autotest_common.sh@1475 -- # uname 00:05:24.611 12:25:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:24.611 12:25:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:24.611 12:25:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:24.870 lcov: LCOV version 1.15 00:05:24.870 12:25:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:42.954 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:42.954 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:57.867 12:26:02 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:57.867 12:26:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:57.867 12:26:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.867 12:26:02 -- spdk/autotest.sh@78 -- # rm -f 00:05:57.867 12:26:02 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:57.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:57.867 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:57.867 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:57.867 12:26:03 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:57.867 12:26:03 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:57.867 12:26:03 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:57.868 12:26:03 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:57.868 12:26:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:57.868 12:26:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:57.868 12:26:03 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:57.868 12:26:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:57.868 12:26:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:57.868 12:26:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:57.868 12:26:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:57.868 12:26:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:57.868 12:26:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:57.868 12:26:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:57.868 12:26:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:57.868 12:26:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:57.868 12:26:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:57.868 12:26:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:57.868 12:26:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:57.868 12:26:03 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:57.868 12:26:03 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:57.868 12:26:03 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:57.868 12:26:03 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:57.868 12:26:03 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:57.868 12:26:03 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:57.868 12:26:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:57.868 12:26:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:57.868 12:26:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:57.868 12:26:03 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:57.868 12:26:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:58.147 No valid GPT data, bailing 
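The xtrace just above walks get_zoned_devs over every /sys/block/nvme* entry: a namespace counts as zoned only when its queue/zoned attribute exists and reads something other than "none", and the collected list feeds the zoned-device count checked right after. A standalone bash sketch of that probe (illustrative names, not the verbatim autotest_common.sh helpers):

# Illustrative sketch of the zoned-namespace probe traced above; helper and
# variable names are placeholders, not the exact autotest_common.sh functions.
is_block_zoned() {
    local device=$1
    # Non-zoned devices either lack the sysfs attribute or report "none".
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(cat "/sys/block/$device/queue/zoned") != none ]]
}

zoned_devs=()
for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    if is_block_zoned "$dev"; then
        zoned_devs+=("$dev")
    fi
done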
00:05:58.147 12:26:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:58.147 12:26:03 -- scripts/common.sh@394 -- # pt= 00:05:58.147 12:26:03 -- scripts/common.sh@395 -- # return 1 00:05:58.147 12:26:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:58.147 1+0 records in 00:05:58.147 1+0 records out 00:05:58.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00386902 s, 271 MB/s 00:05:58.147 12:26:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:58.147 12:26:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:58.147 12:26:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:58.147 12:26:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:58.147 12:26:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:58.147 No valid GPT data, bailing 00:05:58.147 12:26:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:58.147 12:26:03 -- scripts/common.sh@394 -- # pt= 00:05:58.147 12:26:03 -- scripts/common.sh@395 -- # return 1 00:05:58.147 12:26:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:58.147 1+0 records in 00:05:58.147 1+0 records out 00:05:58.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0047927 s, 219 MB/s 00:05:58.147 12:26:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:58.147 12:26:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:58.147 12:26:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:58.147 12:26:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:58.147 12:26:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:58.147 No valid GPT data, bailing 00:05:58.147 12:26:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:58.147 12:26:03 -- scripts/common.sh@394 -- # pt= 00:05:58.147 12:26:03 -- scripts/common.sh@395 -- # return 1 00:05:58.147 12:26:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:58.147 1+0 records in 00:05:58.147 1+0 records out 00:05:58.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422029 s, 248 MB/s 00:05:58.147 12:26:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:58.147 12:26:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:58.147 12:26:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:58.147 12:26:03 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:58.147 12:26:03 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:58.147 No valid GPT data, bailing 00:05:58.147 12:26:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:58.147 12:26:03 -- scripts/common.sh@394 -- # pt= 00:05:58.147 12:26:03 -- scripts/common.sh@395 -- # return 1 00:05:58.147 12:26:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:58.147 1+0 records in 00:05:58.147 1+0 records out 00:05:58.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415455 s, 252 MB/s 00:05:58.147 12:26:03 -- spdk/autotest.sh@105 -- # sync 00:05:58.147 12:26:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:58.147 12:26:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:58.147 12:26:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:00.051 12:26:05 -- spdk/autotest.sh@111 -- # uname -s 00:06:00.051 12:26:05 -- spdk/autotest.sh@111 -- # [[ Linux == 
Linux ]] 00:06:00.051 12:26:05 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:00.051 12:26:05 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:00.988 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:00.988 Hugepages 00:06:00.988 node hugesize free / total 00:06:00.988 node0 1048576kB 0 / 0 00:06:00.988 node0 2048kB 0 / 0 00:06:00.988 00:06:00.988 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:00.988 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:00.988 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:00.988 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:00.988 12:26:06 -- spdk/autotest.sh@117 -- # uname -s 00:06:00.988 12:26:06 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:00.988 12:26:06 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:00.988 12:26:06 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:01.555 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:01.815 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:01.815 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:01.815 12:26:06 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:02.752 12:26:07 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:02.752 12:26:07 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:02.752 12:26:07 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:02.752 12:26:07 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:02.752 12:26:07 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:02.752 12:26:07 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:02.752 12:26:07 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:02.752 12:26:07 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:02.752 12:26:07 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:03.011 12:26:08 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:03.011 12:26:08 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:03.011 12:26:08 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:03.270 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:03.270 Waiting for block devices as requested 00:06:03.270 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:03.530 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:03.530 12:26:08 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:03.530 12:26:08 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:03.530 12:26:08 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:03.530 12:26:08 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:03.530 12:26:08 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:03.530 12:26:08 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:03.530 12:26:08 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:03.530 12:26:08 -- common/autotest_common.sh@1490 -- # printf 
'%s\n' nvme1 00:06:03.530 12:26:08 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:03.530 12:26:08 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:03.530 12:26:08 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:03.530 12:26:08 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:03.530 12:26:08 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:03.530 12:26:08 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:03.530 12:26:08 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:03.530 12:26:08 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:03.530 12:26:08 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:03.530 12:26:08 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:03.530 12:26:08 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:03.530 12:26:08 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:03.530 12:26:08 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:03.530 12:26:08 -- common/autotest_common.sh@1541 -- # continue 00:06:03.530 12:26:08 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:03.530 12:26:08 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:03.530 12:26:08 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:03.530 12:26:08 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:03.530 12:26:08 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:03.530 12:26:08 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:03.530 12:26:08 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:03.530 12:26:08 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:03.530 12:26:08 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:03.530 12:26:08 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:03.530 12:26:08 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:03.530 12:26:08 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:03.530 12:26:08 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:03.530 12:26:08 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:03.530 12:26:08 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:03.530 12:26:08 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:03.530 12:26:08 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:03.530 12:26:08 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:03.530 12:26:08 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:03.530 12:26:08 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:03.530 12:26:08 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:03.530 12:26:08 -- common/autotest_common.sh@1541 -- # continue 00:06:03.530 12:26:08 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:03.530 12:26:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:03.530 12:26:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.530 12:26:08 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:03.530 12:26:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:03.530 12:26:08 -- common/autotest_common.sh@10 -- # set +x 00:06:03.530 12:26:08 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:04.099 0000:00:03.0 (1af4 1001): 
Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:04.358 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.358 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.358 12:26:09 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:04.358 12:26:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.358 12:26:09 -- common/autotest_common.sh@10 -- # set +x 00:06:04.358 12:26:09 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:04.358 12:26:09 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:04.358 12:26:09 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:04.358 12:26:09 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:04.358 12:26:09 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:04.358 12:26:09 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:04.358 12:26:09 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:04.358 12:26:09 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:04.358 12:26:09 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:04.358 12:26:09 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:04.359 12:26:09 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:04.359 12:26:09 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:04.359 12:26:09 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:04.618 12:26:09 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:04.618 12:26:09 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:04.618 12:26:09 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:04.618 12:26:09 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:04.618 12:26:09 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:04.618 12:26:09 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:04.618 12:26:09 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:04.618 12:26:09 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:04.618 12:26:09 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:04.618 12:26:09 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:04.618 12:26:09 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:04.618 12:26:09 -- common/autotest_common.sh@1570 -- # return 0 00:06:04.618 12:26:09 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:04.618 12:26:09 -- common/autotest_common.sh@1578 -- # return 0 00:06:04.618 12:26:09 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:04.618 12:26:09 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:04.618 12:26:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:04.618 12:26:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:04.618 12:26:09 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:04.618 12:26:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.618 12:26:09 -- common/autotest_common.sh@10 -- # set +x 00:06:04.618 12:26:09 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:04.618 12:26:09 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:04.618 12:26:09 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:04.618 12:26:09 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:04.618 12:26:09 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.618 12:26:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.618 12:26:09 -- common/autotest_common.sh@10 -- # set +x 00:06:04.618 ************************************ 00:06:04.618 START TEST env 00:06:04.618 ************************************ 00:06:04.618 12:26:09 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:04.618 * Looking for test storage... 00:06:04.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:04.618 12:26:09 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:04.618 12:26:09 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:04.618 12:26:09 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:04.618 12:26:09 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:04.618 12:26:09 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.618 12:26:09 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.618 12:26:09 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.618 12:26:09 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.618 12:26:09 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.618 12:26:09 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.618 12:26:09 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.618 12:26:09 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.618 12:26:09 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.618 12:26:09 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.618 12:26:09 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.618 12:26:09 env -- scripts/common.sh@344 -- # case "$op" in 00:06:04.618 12:26:09 env -- scripts/common.sh@345 -- # : 1 00:06:04.618 12:26:09 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.618 12:26:09 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.618 12:26:09 env -- scripts/common.sh@365 -- # decimal 1 00:06:04.618 12:26:09 env -- scripts/common.sh@353 -- # local d=1 00:06:04.618 12:26:09 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.618 12:26:09 env -- scripts/common.sh@355 -- # echo 1 00:06:04.618 12:26:09 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.618 12:26:09 env -- scripts/common.sh@366 -- # decimal 2 00:06:04.618 12:26:09 env -- scripts/common.sh@353 -- # local d=2 00:06:04.618 12:26:09 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.618 12:26:09 env -- scripts/common.sh@355 -- # echo 2 00:06:04.618 12:26:09 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.618 12:26:09 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.618 12:26:09 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.618 12:26:09 env -- scripts/common.sh@368 -- # return 0 00:06:04.618 12:26:09 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.618 12:26:09 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:04.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.618 --rc genhtml_branch_coverage=1 00:06:04.618 --rc genhtml_function_coverage=1 00:06:04.618 --rc genhtml_legend=1 00:06:04.618 --rc geninfo_all_blocks=1 00:06:04.618 --rc geninfo_unexecuted_blocks=1 00:06:04.618 00:06:04.618 ' 00:06:04.618 12:26:09 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:04.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.618 --rc genhtml_branch_coverage=1 00:06:04.618 --rc genhtml_function_coverage=1 00:06:04.618 --rc genhtml_legend=1 00:06:04.618 --rc geninfo_all_blocks=1 00:06:04.618 --rc geninfo_unexecuted_blocks=1 00:06:04.618 00:06:04.618 ' 00:06:04.618 12:26:09 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:04.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.618 --rc genhtml_branch_coverage=1 00:06:04.618 --rc genhtml_function_coverage=1 00:06:04.618 --rc genhtml_legend=1 00:06:04.618 --rc geninfo_all_blocks=1 00:06:04.618 --rc geninfo_unexecuted_blocks=1 00:06:04.618 00:06:04.618 ' 00:06:04.618 12:26:09 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:04.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.618 --rc genhtml_branch_coverage=1 00:06:04.618 --rc genhtml_function_coverage=1 00:06:04.618 --rc genhtml_legend=1 00:06:04.618 --rc geninfo_all_blocks=1 00:06:04.618 --rc geninfo_unexecuted_blocks=1 00:06:04.618 00:06:04.618 ' 00:06:04.618 12:26:09 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:04.618 12:26:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.618 12:26:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.618 12:26:09 env -- common/autotest_common.sh@10 -- # set +x 00:06:04.878 ************************************ 00:06:04.878 START TEST env_memory 00:06:04.878 ************************************ 00:06:04.878 12:26:09 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:04.878 00:06:04.878 00:06:04.878 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.878 http://cunit.sourceforge.net/ 00:06:04.878 00:06:04.878 00:06:04.878 Suite: memory 00:06:04.878 Test: alloc and free memory map ...[2024-11-19 12:26:09.927702] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:04.878 passed 00:06:04.878 Test: mem map translation ...[2024-11-19 12:26:09.958284] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:04.878 [2024-11-19 12:26:09.958326] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:04.878 [2024-11-19 12:26:09.958381] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:04.878 [2024-11-19 12:26:09.958393] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:04.878 passed 00:06:04.878 Test: mem map registration ...[2024-11-19 12:26:10.022060] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:04.878 [2024-11-19 12:26:10.022092] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:04.878 passed 00:06:04.878 Test: mem map adjacent registrations ...passed 00:06:04.878 00:06:04.878 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.878 suites 1 1 n/a 0 0 00:06:04.878 tests 4 4 4 0 0 00:06:04.878 asserts 152 152 152 0 n/a 00:06:04.878 00:06:04.878 Elapsed time = 0.213 seconds 00:06:04.878 00:06:04.878 real 0m0.229s 00:06:04.878 user 0m0.214s 00:06:04.878 sys 0m0.011s 00:06:04.878 ************************************ 00:06:04.878 END TEST env_memory 00:06:04.878 ************************************ 00:06:04.878 12:26:10 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.878 12:26:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:05.137 12:26:10 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:05.137 12:26:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.137 12:26:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.137 12:26:10 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.137 ************************************ 00:06:05.137 START TEST env_vtophys 00:06:05.137 ************************************ 00:06:05.137 12:26:10 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:05.137 EAL: lib.eal log level changed from notice to debug 00:06:05.137 EAL: Detected lcore 0 as core 0 on socket 0 00:06:05.137 EAL: Detected lcore 1 as core 0 on socket 0 00:06:05.137 EAL: Detected lcore 2 as core 0 on socket 0 00:06:05.137 EAL: Detected lcore 3 as core 0 on socket 0 00:06:05.137 EAL: Detected lcore 4 as core 0 on socket 0 00:06:05.137 EAL: Detected lcore 5 as core 0 on socket 0 00:06:05.137 EAL: Detected lcore 6 as core 0 on socket 0 00:06:05.137 EAL: Detected lcore 7 as core 0 on socket 0 00:06:05.137 EAL: Detected lcore 8 as core 0 on socket 0 00:06:05.138 EAL: Detected lcore 9 as core 0 on socket 0 00:06:05.138 EAL: Maximum logical cores by configuration: 128 00:06:05.138 EAL: Detected CPU lcores: 10 00:06:05.138 EAL: Detected NUMA nodes: 1 00:06:05.138 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:05.138 EAL: Detected shared linkage of DPDK 00:06:05.138 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:05.138 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:05.138 EAL: Registered [vdev] bus. 00:06:05.138 EAL: bus.vdev log level changed from disabled to notice 00:06:05.138 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:05.138 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:05.138 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:05.138 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:05.138 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:05.138 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:05.138 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:05.138 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:05.138 EAL: No shared files mode enabled, IPC will be disabled 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: Selected IOVA mode 'PA' 00:06:05.138 EAL: Probing VFIO support... 00:06:05.138 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:05.138 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:05.138 EAL: Ask a virtual area of 0x2e000 bytes 00:06:05.138 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:05.138 EAL: Setting up physically contiguous memory... 00:06:05.138 EAL: Setting maximum number of open files to 524288 00:06:05.138 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:05.138 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:05.138 EAL: Ask a virtual area of 0x61000 bytes 00:06:05.138 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:05.138 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:05.138 EAL: Ask a virtual area of 0x400000000 bytes 00:06:05.138 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:05.138 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:05.138 EAL: Ask a virtual area of 0x61000 bytes 00:06:05.138 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:05.138 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:05.138 EAL: Ask a virtual area of 0x400000000 bytes 00:06:05.138 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:05.138 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:05.138 EAL: Ask a virtual area of 0x61000 bytes 00:06:05.138 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:05.138 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:05.138 EAL: Ask a virtual area of 0x400000000 bytes 00:06:05.138 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:05.138 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:05.138 EAL: Ask a virtual area of 0x61000 bytes 00:06:05.138 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:05.138 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:05.138 EAL: Ask a virtual area of 0x400000000 bytes 00:06:05.138 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:05.138 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:06:05.138 EAL: Hugepages will be freed exactly as allocated. 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: TSC frequency is ~2200000 KHz 00:06:05.138 EAL: Main lcore 0 is ready (tid=7fd907314a00;cpuset=[0]) 00:06:05.138 EAL: Trying to obtain current memory policy. 00:06:05.138 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.138 EAL: Restoring previous memory policy: 0 00:06:05.138 EAL: request: mp_malloc_sync 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: Heap on socket 0 was expanded by 2MB 00:06:05.138 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:05.138 EAL: Mem event callback 'spdk:(nil)' registered 00:06:05.138 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:05.138 00:06:05.138 00:06:05.138 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.138 http://cunit.sourceforge.net/ 00:06:05.138 00:06:05.138 00:06:05.138 Suite: components_suite 00:06:05.138 Test: vtophys_malloc_test ...passed 00:06:05.138 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:05.138 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.138 EAL: Restoring previous memory policy: 4 00:06:05.138 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.138 EAL: request: mp_malloc_sync 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: Heap on socket 0 was expanded by 4MB 00:06:05.138 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.138 EAL: request: mp_malloc_sync 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: Heap on socket 0 was shrunk by 4MB 00:06:05.138 EAL: Trying to obtain current memory policy. 00:06:05.138 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.138 EAL: Restoring previous memory policy: 4 00:06:05.138 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.138 EAL: request: mp_malloc_sync 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: Heap on socket 0 was expanded by 6MB 00:06:05.138 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.138 EAL: request: mp_malloc_sync 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: Heap on socket 0 was shrunk by 6MB 00:06:05.138 EAL: Trying to obtain current memory policy. 00:06:05.138 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.138 EAL: Restoring previous memory policy: 4 00:06:05.138 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.138 EAL: request: mp_malloc_sync 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: Heap on socket 0 was expanded by 10MB 00:06:05.138 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.138 EAL: request: mp_malloc_sync 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: Heap on socket 0 was shrunk by 10MB 00:06:05.138 EAL: Trying to obtain current memory policy. 
00:06:05.138 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.138 EAL: Restoring previous memory policy: 4 00:06:05.138 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.138 EAL: request: mp_malloc_sync 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: Heap on socket 0 was expanded by 18MB 00:06:05.138 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.138 EAL: request: mp_malloc_sync 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: Heap on socket 0 was shrunk by 18MB 00:06:05.138 EAL: Trying to obtain current memory policy. 00:06:05.138 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.138 EAL: Restoring previous memory policy: 4 00:06:05.138 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.138 EAL: request: mp_malloc_sync 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: Heap on socket 0 was expanded by 34MB 00:06:05.138 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.138 EAL: request: mp_malloc_sync 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: Heap on socket 0 was shrunk by 34MB 00:06:05.138 EAL: Trying to obtain current memory policy. 00:06:05.138 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.138 EAL: Restoring previous memory policy: 4 00:06:05.138 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.138 EAL: request: mp_malloc_sync 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: Heap on socket 0 was expanded by 66MB 00:06:05.138 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.138 EAL: request: mp_malloc_sync 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: Heap on socket 0 was shrunk by 66MB 00:06:05.138 EAL: Trying to obtain current memory policy. 00:06:05.138 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.138 EAL: Restoring previous memory policy: 4 00:06:05.138 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.138 EAL: request: mp_malloc_sync 00:06:05.138 EAL: No shared files mode enabled, IPC is disabled 00:06:05.138 EAL: Heap on socket 0 was expanded by 130MB 00:06:05.138 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.398 EAL: request: mp_malloc_sync 00:06:05.398 EAL: No shared files mode enabled, IPC is disabled 00:06:05.398 EAL: Heap on socket 0 was shrunk by 130MB 00:06:05.398 EAL: Trying to obtain current memory policy. 00:06:05.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.398 EAL: Restoring previous memory policy: 4 00:06:05.398 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.398 EAL: request: mp_malloc_sync 00:06:05.398 EAL: No shared files mode enabled, IPC is disabled 00:06:05.398 EAL: Heap on socket 0 was expanded by 258MB 00:06:05.398 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.398 EAL: request: mp_malloc_sync 00:06:05.398 EAL: No shared files mode enabled, IPC is disabled 00:06:05.398 EAL: Heap on socket 0 was shrunk by 258MB 00:06:05.398 EAL: Trying to obtain current memory policy. 
00:06:05.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.398 EAL: Restoring previous memory policy: 4 00:06:05.398 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.398 EAL: request: mp_malloc_sync 00:06:05.398 EAL: No shared files mode enabled, IPC is disabled 00:06:05.398 EAL: Heap on socket 0 was expanded by 514MB 00:06:05.398 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.657 EAL: request: mp_malloc_sync 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: Heap on socket 0 was shrunk by 514MB 00:06:05.657 EAL: Trying to obtain current memory policy. 00:06:05.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.657 EAL: Restoring previous memory policy: 4 00:06:05.657 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.657 EAL: request: mp_malloc_sync 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: Heap on socket 0 was expanded by 1026MB 00:06:05.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.916 passed 00:06:05.916 00:06:05.916 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.916 suites 1 1 n/a 0 0 00:06:05.916 tests 2 2 2 0 0 00:06:05.916 asserts 5449 5449 5449 0 n/a 00:06:05.916 00:06:05.916 Elapsed time = 0.701 seconds 00:06:05.916 EAL: request: mp_malloc_sync 00:06:05.916 EAL: No shared files mode enabled, IPC is disabled 00:06:05.916 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:05.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.916 EAL: request: mp_malloc_sync 00:06:05.916 EAL: No shared files mode enabled, IPC is disabled 00:06:05.916 EAL: Heap on socket 0 was shrunk by 2MB 00:06:05.916 EAL: No shared files mode enabled, IPC is disabled 00:06:05.916 EAL: No shared files mode enabled, IPC is disabled 00:06:05.916 EAL: No shared files mode enabled, IPC is disabled 00:06:05.916 ************************************ 00:06:05.916 END TEST env_vtophys 00:06:05.916 ************************************ 00:06:05.916 00:06:05.916 real 0m0.899s 00:06:05.916 user 0m0.457s 00:06:05.916 sys 0m0.309s 00:06:05.916 12:26:11 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.916 12:26:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:05.917 12:26:11 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:05.917 12:26:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.917 12:26:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.917 12:26:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.917 ************************************ 00:06:05.917 START TEST env_pci 00:06:05.917 ************************************ 00:06:05.917 12:26:11 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:05.917 00:06:05.917 00:06:05.917 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.917 http://cunit.sourceforge.net/ 00:06:05.917 00:06:05.917 00:06:05.917 Suite: pci 00:06:05.917 Test: pci_hook ...[2024-11-19 12:26:11.117331] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69822 has claimed it 00:06:05.917 passed 00:06:05.917 00:06:05.917 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.917 suites 1 1 n/a 0 0 00:06:05.917 tests 1 1 1 0 0 00:06:05.917 asserts 25 25 25 0 n/a 00:06:05.917 00:06:05.917 Elapsed time = 0.002 seconds 00:06:05.917 EAL: Cannot find 
device (10000:00:01.0) 00:06:05.917 EAL: Failed to attach device on primary process 00:06:05.917 ************************************ 00:06:05.917 END TEST env_pci 00:06:05.917 ************************************ 00:06:05.917 00:06:05.917 real 0m0.019s 00:06:05.917 user 0m0.009s 00:06:05.917 sys 0m0.010s 00:06:05.917 12:26:11 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.917 12:26:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:05.917 12:26:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:05.917 12:26:11 env -- env/env.sh@15 -- # uname 00:06:05.917 12:26:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:05.917 12:26:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:05.917 12:26:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:05.917 12:26:11 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:05.917 12:26:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.917 12:26:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.176 ************************************ 00:06:06.176 START TEST env_dpdk_post_init 00:06:06.176 ************************************ 00:06:06.176 12:26:11 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:06.176 EAL: Detected CPU lcores: 10 00:06:06.176 EAL: Detected NUMA nodes: 1 00:06:06.176 EAL: Detected shared linkage of DPDK 00:06:06.176 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:06.176 EAL: Selected IOVA mode 'PA' 00:06:06.176 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:06.176 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:06.176 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:06.176 Starting DPDK initialization... 00:06:06.176 Starting SPDK post initialization... 00:06:06.176 SPDK NVMe probe 00:06:06.176 Attaching to 0000:00:10.0 00:06:06.176 Attaching to 0000:00:11.0 00:06:06.176 Attached to 0000:00:10.0 00:06:06.176 Attached to 0000:00:11.0 00:06:06.176 Cleaning up... 
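The env_dpdk_post_init run above probes the two emulated NVMe controllers (0000:00:10.0 and 0000:00:11.0) once DPDK initialization completes. A minimal sketch of reproducing that run by hand, assuming the repository layout shown in this log; the setup.sh step and the HUGEMEM value are assumptions, while the test binary and its flags are taken verbatim from the invocation above:
sudo HUGEMEM=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh   # assumed prerequisite: reserve hugepages, bind NVMe devices to a userspace driver
# Same post-init test binary and flags as logged above: core mask 0x1 and a fixed base
# virtual address so the reserved memseg mappings land where SPDK expects them.
/home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000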
00:06:06.176 00:06:06.176 real 0m0.171s 00:06:06.176 user 0m0.045s 00:06:06.176 sys 0m0.026s 00:06:06.176 12:26:11 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.176 12:26:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:06.176 ************************************ 00:06:06.176 END TEST env_dpdk_post_init 00:06:06.176 ************************************ 00:06:06.176 12:26:11 env -- env/env.sh@26 -- # uname 00:06:06.176 12:26:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:06.176 12:26:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:06.176 12:26:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.176 12:26:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.176 12:26:11 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.176 ************************************ 00:06:06.176 START TEST env_mem_callbacks 00:06:06.176 ************************************ 00:06:06.176 12:26:11 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:06.176 EAL: Detected CPU lcores: 10 00:06:06.176 EAL: Detected NUMA nodes: 1 00:06:06.176 EAL: Detected shared linkage of DPDK 00:06:06.176 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:06.435 EAL: Selected IOVA mode 'PA' 00:06:06.435 00:06:06.435 00:06:06.435 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.435 http://cunit.sourceforge.net/ 00:06:06.435 00:06:06.435 00:06:06.436 Suite: memory 00:06:06.436 Test: test ... 00:06:06.436 register 0x200000200000 2097152 00:06:06.436 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:06.436 malloc 3145728 00:06:06.436 register 0x200000400000 4194304 00:06:06.436 buf 0x200000500000 len 3145728 PASSED 00:06:06.436 malloc 64 00:06:06.436 buf 0x2000004fff40 len 64 PASSED 00:06:06.436 malloc 4194304 00:06:06.436 register 0x200000800000 6291456 00:06:06.436 buf 0x200000a00000 len 4194304 PASSED 00:06:06.436 free 0x200000500000 3145728 00:06:06.436 free 0x2000004fff40 64 00:06:06.436 unregister 0x200000400000 4194304 PASSED 00:06:06.436 free 0x200000a00000 4194304 00:06:06.436 unregister 0x200000800000 6291456 PASSED 00:06:06.436 malloc 8388608 00:06:06.436 register 0x200000400000 10485760 00:06:06.436 buf 0x200000600000 len 8388608 PASSED 00:06:06.436 free 0x200000600000 8388608 00:06:06.436 unregister 0x200000400000 10485760 PASSED 00:06:06.436 passed 00:06:06.436 00:06:06.436 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.436 suites 1 1 n/a 0 0 00:06:06.436 tests 1 1 1 0 0 00:06:06.436 asserts 15 15 15 0 n/a 00:06:06.436 00:06:06.436 Elapsed time = 0.006 seconds 00:06:06.436 00:06:06.436 real 0m0.139s 00:06:06.436 user 0m0.018s 00:06:06.436 sys 0m0.020s 00:06:06.436 12:26:11 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.436 12:26:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:06.436 ************************************ 00:06:06.436 END TEST env_mem_callbacks 00:06:06.436 ************************************ 00:06:06.436 00:06:06.436 real 0m1.908s 00:06:06.436 user 0m0.937s 00:06:06.436 sys 0m0.616s 00:06:06.436 12:26:11 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.436 ************************************ 00:06:06.436 END TEST env 00:06:06.436 ************************************ 00:06:06.436 12:26:11 env -- 
common/autotest_common.sh@10 -- # set +x 00:06:06.436 12:26:11 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:06.436 12:26:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.436 12:26:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.436 12:26:11 -- common/autotest_common.sh@10 -- # set +x 00:06:06.436 ************************************ 00:06:06.436 START TEST rpc 00:06:06.436 ************************************ 00:06:06.436 12:26:11 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:06.695 * Looking for test storage... 00:06:06.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:06.695 12:26:11 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:06.695 12:26:11 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.695 12:26:11 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:06.695 12:26:11 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.695 12:26:11 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.695 12:26:11 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.695 12:26:11 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.695 12:26:11 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.695 12:26:11 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.695 12:26:11 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.695 12:26:11 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.695 12:26:11 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.695 12:26:11 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.695 12:26:11 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.695 12:26:11 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.695 12:26:11 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:06.695 12:26:11 rpc -- scripts/common.sh@345 -- # : 1 00:06:06.695 12:26:11 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.695 12:26:11 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.695 12:26:11 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:06.695 12:26:11 rpc -- scripts/common.sh@353 -- # local d=1 00:06:06.695 12:26:11 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.695 12:26:11 rpc -- scripts/common.sh@355 -- # echo 1 00:06:06.695 12:26:11 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.695 12:26:11 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:06.695 12:26:11 rpc -- scripts/common.sh@353 -- # local d=2 00:06:06.695 12:26:11 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.695 12:26:11 rpc -- scripts/common.sh@355 -- # echo 2 00:06:06.695 12:26:11 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.695 12:26:11 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.695 12:26:11 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.695 12:26:11 rpc -- scripts/common.sh@368 -- # return 0 00:06:06.695 12:26:11 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.695 12:26:11 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.695 --rc genhtml_branch_coverage=1 00:06:06.695 --rc genhtml_function_coverage=1 00:06:06.695 --rc genhtml_legend=1 00:06:06.695 --rc geninfo_all_blocks=1 00:06:06.695 --rc geninfo_unexecuted_blocks=1 00:06:06.695 00:06:06.695 ' 00:06:06.695 12:26:11 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.695 --rc genhtml_branch_coverage=1 00:06:06.695 --rc genhtml_function_coverage=1 00:06:06.695 --rc genhtml_legend=1 00:06:06.695 --rc geninfo_all_blocks=1 00:06:06.695 --rc geninfo_unexecuted_blocks=1 00:06:06.695 00:06:06.695 ' 00:06:06.695 12:26:11 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.695 --rc genhtml_branch_coverage=1 00:06:06.695 --rc genhtml_function_coverage=1 00:06:06.695 --rc genhtml_legend=1 00:06:06.695 --rc geninfo_all_blocks=1 00:06:06.695 --rc geninfo_unexecuted_blocks=1 00:06:06.695 00:06:06.695 ' 00:06:06.695 12:26:11 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.695 --rc genhtml_branch_coverage=1 00:06:06.695 --rc genhtml_function_coverage=1 00:06:06.695 --rc genhtml_legend=1 00:06:06.695 --rc geninfo_all_blocks=1 00:06:06.695 --rc geninfo_unexecuted_blocks=1 00:06:06.695 00:06:06.695 ' 00:06:06.695 12:26:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69940 00:06:06.695 12:26:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.695 12:26:11 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:06.695 12:26:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69940 00:06:06.695 12:26:11 rpc -- common/autotest_common.sh@831 -- # '[' -z 69940 ']' 00:06:06.695 12:26:11 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.695 12:26:11 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.695 12:26:11 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
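The rpc suite here launches spdk_tgt with the bdev tracepoint group enabled and then drives it over the Unix-domain RPC socket /var/tmp/spdk.sock. A condensed manual sketch of the sequence the rpc_integrity test exercises below, built from the rpc.py commands visible in this transcript; the backgrounding and the jq pipe are illustrative rather than part of the test script:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &                                 # start the target with bdev tracepoints, as logged above
# wait until /var/tmp/spdk.sock is accepting connections, then:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 8 512                      # creates Malloc0 (8 MiB, 512-byte blocks)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # layer a passthru bdev on top of it
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2 bdevs while both exist
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_delete Passthru0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0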
00:06:06.695 12:26:11 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.695 12:26:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.695 [2024-11-19 12:26:11.908879] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:06.695 [2024-11-19 12:26:11.908988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69940 ] 00:06:06.954 [2024-11-19 12:26:12.050535] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.954 [2024-11-19 12:26:12.093589] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:06.954 [2024-11-19 12:26:12.093640] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69940' to capture a snapshot of events at runtime. 00:06:06.954 [2024-11-19 12:26:12.093654] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:06.954 [2024-11-19 12:26:12.093692] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:06.954 [2024-11-19 12:26:12.093709] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69940 for offline analysis/debug. 00:06:06.954 [2024-11-19 12:26:12.093768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.954 [2024-11-19 12:26:12.136997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.214 12:26:12 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.214 12:26:12 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:07.214 12:26:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:07.214 12:26:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:07.214 12:26:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:07.214 12:26:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:07.214 12:26:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.214 12:26:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.214 12:26:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.214 ************************************ 00:06:07.214 START TEST rpc_integrity 00:06:07.214 ************************************ 00:06:07.214 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:07.214 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:07.214 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.214 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.214 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.214 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:07.214 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:07.214 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:07.214 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:06:07.214 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.214 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.214 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.214 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:07.214 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:07.214 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.214 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.214 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.214 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:07.214 { 00:06:07.214 "name": "Malloc0", 00:06:07.214 "aliases": [ 00:06:07.214 "ba93d4b6-0009-4553-8c37-ed1a2e9f4078" 00:06:07.214 ], 00:06:07.214 "product_name": "Malloc disk", 00:06:07.214 "block_size": 512, 00:06:07.214 "num_blocks": 16384, 00:06:07.214 "uuid": "ba93d4b6-0009-4553-8c37-ed1a2e9f4078", 00:06:07.214 "assigned_rate_limits": { 00:06:07.214 "rw_ios_per_sec": 0, 00:06:07.214 "rw_mbytes_per_sec": 0, 00:06:07.214 "r_mbytes_per_sec": 0, 00:06:07.214 "w_mbytes_per_sec": 0 00:06:07.214 }, 00:06:07.214 "claimed": false, 00:06:07.214 "zoned": false, 00:06:07.214 "supported_io_types": { 00:06:07.214 "read": true, 00:06:07.214 "write": true, 00:06:07.214 "unmap": true, 00:06:07.214 "flush": true, 00:06:07.214 "reset": true, 00:06:07.214 "nvme_admin": false, 00:06:07.214 "nvme_io": false, 00:06:07.214 "nvme_io_md": false, 00:06:07.214 "write_zeroes": true, 00:06:07.214 "zcopy": true, 00:06:07.214 "get_zone_info": false, 00:06:07.214 "zone_management": false, 00:06:07.214 "zone_append": false, 00:06:07.214 "compare": false, 00:06:07.214 "compare_and_write": false, 00:06:07.214 "abort": true, 00:06:07.214 "seek_hole": false, 00:06:07.214 "seek_data": false, 00:06:07.214 "copy": true, 00:06:07.214 "nvme_iov_md": false 00:06:07.214 }, 00:06:07.214 "memory_domains": [ 00:06:07.214 { 00:06:07.214 "dma_device_id": "system", 00:06:07.214 "dma_device_type": 1 00:06:07.214 }, 00:06:07.214 { 00:06:07.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.214 "dma_device_type": 2 00:06:07.214 } 00:06:07.214 ], 00:06:07.214 "driver_specific": {} 00:06:07.214 } 00:06:07.214 ]' 00:06:07.214 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:07.214 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:07.214 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:07.214 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.214 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.214 [2024-11-19 12:26:12.443188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:07.214 [2024-11-19 12:26:12.443379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:07.214 [2024-11-19 12:26:12.443406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f61030 00:06:07.214 [2024-11-19 12:26:12.443415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:07.214 [2024-11-19 12:26:12.445033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:07.214 [2024-11-19 12:26:12.445095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:06:07.214 Passthru0 00:06:07.214 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.214 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:07.214 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.214 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.473 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.473 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:07.473 { 00:06:07.473 "name": "Malloc0", 00:06:07.473 "aliases": [ 00:06:07.473 "ba93d4b6-0009-4553-8c37-ed1a2e9f4078" 00:06:07.473 ], 00:06:07.473 "product_name": "Malloc disk", 00:06:07.473 "block_size": 512, 00:06:07.473 "num_blocks": 16384, 00:06:07.473 "uuid": "ba93d4b6-0009-4553-8c37-ed1a2e9f4078", 00:06:07.473 "assigned_rate_limits": { 00:06:07.473 "rw_ios_per_sec": 0, 00:06:07.473 "rw_mbytes_per_sec": 0, 00:06:07.473 "r_mbytes_per_sec": 0, 00:06:07.473 "w_mbytes_per_sec": 0 00:06:07.473 }, 00:06:07.473 "claimed": true, 00:06:07.473 "claim_type": "exclusive_write", 00:06:07.473 "zoned": false, 00:06:07.473 "supported_io_types": { 00:06:07.473 "read": true, 00:06:07.473 "write": true, 00:06:07.473 "unmap": true, 00:06:07.473 "flush": true, 00:06:07.473 "reset": true, 00:06:07.473 "nvme_admin": false, 00:06:07.473 "nvme_io": false, 00:06:07.473 "nvme_io_md": false, 00:06:07.473 "write_zeroes": true, 00:06:07.473 "zcopy": true, 00:06:07.473 "get_zone_info": false, 00:06:07.473 "zone_management": false, 00:06:07.473 "zone_append": false, 00:06:07.473 "compare": false, 00:06:07.473 "compare_and_write": false, 00:06:07.473 "abort": true, 00:06:07.473 "seek_hole": false, 00:06:07.473 "seek_data": false, 00:06:07.473 "copy": true, 00:06:07.473 "nvme_iov_md": false 00:06:07.473 }, 00:06:07.473 "memory_domains": [ 00:06:07.473 { 00:06:07.473 "dma_device_id": "system", 00:06:07.473 "dma_device_type": 1 00:06:07.473 }, 00:06:07.473 { 00:06:07.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.473 "dma_device_type": 2 00:06:07.473 } 00:06:07.473 ], 00:06:07.473 "driver_specific": {} 00:06:07.473 }, 00:06:07.473 { 00:06:07.473 "name": "Passthru0", 00:06:07.473 "aliases": [ 00:06:07.473 "6316a7c9-c0f9-5fa7-8288-c68c77922607" 00:06:07.473 ], 00:06:07.473 "product_name": "passthru", 00:06:07.473 "block_size": 512, 00:06:07.473 "num_blocks": 16384, 00:06:07.473 "uuid": "6316a7c9-c0f9-5fa7-8288-c68c77922607", 00:06:07.473 "assigned_rate_limits": { 00:06:07.473 "rw_ios_per_sec": 0, 00:06:07.473 "rw_mbytes_per_sec": 0, 00:06:07.473 "r_mbytes_per_sec": 0, 00:06:07.473 "w_mbytes_per_sec": 0 00:06:07.473 }, 00:06:07.473 "claimed": false, 00:06:07.473 "zoned": false, 00:06:07.473 "supported_io_types": { 00:06:07.473 "read": true, 00:06:07.473 "write": true, 00:06:07.473 "unmap": true, 00:06:07.473 "flush": true, 00:06:07.473 "reset": true, 00:06:07.473 "nvme_admin": false, 00:06:07.473 "nvme_io": false, 00:06:07.473 "nvme_io_md": false, 00:06:07.473 "write_zeroes": true, 00:06:07.473 "zcopy": true, 00:06:07.473 "get_zone_info": false, 00:06:07.474 "zone_management": false, 00:06:07.474 "zone_append": false, 00:06:07.474 "compare": false, 00:06:07.474 "compare_and_write": false, 00:06:07.474 "abort": true, 00:06:07.474 "seek_hole": false, 00:06:07.474 "seek_data": false, 00:06:07.474 "copy": true, 00:06:07.474 "nvme_iov_md": false 00:06:07.474 }, 00:06:07.474 "memory_domains": [ 00:06:07.474 { 00:06:07.474 "dma_device_id": "system", 00:06:07.474 
"dma_device_type": 1 00:06:07.474 }, 00:06:07.474 { 00:06:07.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.474 "dma_device_type": 2 00:06:07.474 } 00:06:07.474 ], 00:06:07.474 "driver_specific": { 00:06:07.474 "passthru": { 00:06:07.474 "name": "Passthru0", 00:06:07.474 "base_bdev_name": "Malloc0" 00:06:07.474 } 00:06:07.474 } 00:06:07.474 } 00:06:07.474 ]' 00:06:07.474 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:07.474 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:07.474 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:07.474 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.474 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.474 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.474 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:07.474 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.474 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.474 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.474 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:07.474 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.474 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.474 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.474 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:07.474 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:07.474 ************************************ 00:06:07.474 END TEST rpc_integrity 00:06:07.474 ************************************ 00:06:07.474 12:26:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:07.474 00:06:07.474 real 0m0.326s 00:06:07.474 user 0m0.221s 00:06:07.474 sys 0m0.040s 00:06:07.474 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.474 12:26:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.474 12:26:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:07.474 12:26:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.474 12:26:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.474 12:26:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.474 ************************************ 00:06:07.474 START TEST rpc_plugins 00:06:07.474 ************************************ 00:06:07.474 12:26:12 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:07.474 12:26:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:07.474 12:26:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.474 12:26:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.474 12:26:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.474 12:26:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:07.474 12:26:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:07.474 12:26:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.474 12:26:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.474 12:26:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:07.474 12:26:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:07.474 { 00:06:07.474 "name": "Malloc1", 00:06:07.474 "aliases": [ 00:06:07.474 "f1229321-8e2f-4474-8ac0-465a4830a841" 00:06:07.474 ], 00:06:07.474 "product_name": "Malloc disk", 00:06:07.474 "block_size": 4096, 00:06:07.474 "num_blocks": 256, 00:06:07.474 "uuid": "f1229321-8e2f-4474-8ac0-465a4830a841", 00:06:07.474 "assigned_rate_limits": { 00:06:07.474 "rw_ios_per_sec": 0, 00:06:07.474 "rw_mbytes_per_sec": 0, 00:06:07.474 "r_mbytes_per_sec": 0, 00:06:07.474 "w_mbytes_per_sec": 0 00:06:07.474 }, 00:06:07.474 "claimed": false, 00:06:07.474 "zoned": false, 00:06:07.474 "supported_io_types": { 00:06:07.474 "read": true, 00:06:07.474 "write": true, 00:06:07.474 "unmap": true, 00:06:07.474 "flush": true, 00:06:07.474 "reset": true, 00:06:07.474 "nvme_admin": false, 00:06:07.474 "nvme_io": false, 00:06:07.474 "nvme_io_md": false, 00:06:07.474 "write_zeroes": true, 00:06:07.474 "zcopy": true, 00:06:07.474 "get_zone_info": false, 00:06:07.474 "zone_management": false, 00:06:07.474 "zone_append": false, 00:06:07.474 "compare": false, 00:06:07.474 "compare_and_write": false, 00:06:07.474 "abort": true, 00:06:07.474 "seek_hole": false, 00:06:07.474 "seek_data": false, 00:06:07.474 "copy": true, 00:06:07.474 "nvme_iov_md": false 00:06:07.474 }, 00:06:07.474 "memory_domains": [ 00:06:07.474 { 00:06:07.474 "dma_device_id": "system", 00:06:07.474 "dma_device_type": 1 00:06:07.474 }, 00:06:07.474 { 00:06:07.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.474 "dma_device_type": 2 00:06:07.474 } 00:06:07.474 ], 00:06:07.474 "driver_specific": {} 00:06:07.474 } 00:06:07.474 ]' 00:06:07.474 12:26:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:07.733 12:26:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:07.733 12:26:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:07.733 12:26:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.733 12:26:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.733 12:26:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.733 12:26:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:07.733 12:26:12 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.733 12:26:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.733 12:26:12 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.733 12:26:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:07.733 12:26:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:07.733 ************************************ 00:06:07.733 END TEST rpc_plugins 00:06:07.733 ************************************ 00:06:07.733 12:26:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:07.733 00:06:07.733 real 0m0.166s 00:06:07.733 user 0m0.107s 00:06:07.733 sys 0m0.022s 00:06:07.733 12:26:12 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.733 12:26:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.733 12:26:12 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:07.733 12:26:12 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.733 12:26:12 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.733 12:26:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.733 ************************************ 00:06:07.733 START TEST 
rpc_trace_cmd_test 00:06:07.733 ************************************ 00:06:07.733 12:26:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:07.733 12:26:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:07.733 12:26:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:07.733 12:26:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.733 12:26:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:07.733 12:26:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.733 12:26:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:07.733 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69940", 00:06:07.733 "tpoint_group_mask": "0x8", 00:06:07.733 "iscsi_conn": { 00:06:07.733 "mask": "0x2", 00:06:07.733 "tpoint_mask": "0x0" 00:06:07.733 }, 00:06:07.733 "scsi": { 00:06:07.733 "mask": "0x4", 00:06:07.733 "tpoint_mask": "0x0" 00:06:07.733 }, 00:06:07.733 "bdev": { 00:06:07.733 "mask": "0x8", 00:06:07.733 "tpoint_mask": "0xffffffffffffffff" 00:06:07.733 }, 00:06:07.733 "nvmf_rdma": { 00:06:07.733 "mask": "0x10", 00:06:07.733 "tpoint_mask": "0x0" 00:06:07.733 }, 00:06:07.733 "nvmf_tcp": { 00:06:07.733 "mask": "0x20", 00:06:07.733 "tpoint_mask": "0x0" 00:06:07.733 }, 00:06:07.733 "ftl": { 00:06:07.733 "mask": "0x40", 00:06:07.733 "tpoint_mask": "0x0" 00:06:07.733 }, 00:06:07.733 "blobfs": { 00:06:07.733 "mask": "0x80", 00:06:07.733 "tpoint_mask": "0x0" 00:06:07.733 }, 00:06:07.733 "dsa": { 00:06:07.733 "mask": "0x200", 00:06:07.733 "tpoint_mask": "0x0" 00:06:07.733 }, 00:06:07.733 "thread": { 00:06:07.733 "mask": "0x400", 00:06:07.733 "tpoint_mask": "0x0" 00:06:07.733 }, 00:06:07.733 "nvme_pcie": { 00:06:07.733 "mask": "0x800", 00:06:07.733 "tpoint_mask": "0x0" 00:06:07.733 }, 00:06:07.733 "iaa": { 00:06:07.733 "mask": "0x1000", 00:06:07.733 "tpoint_mask": "0x0" 00:06:07.733 }, 00:06:07.733 "nvme_tcp": { 00:06:07.733 "mask": "0x2000", 00:06:07.733 "tpoint_mask": "0x0" 00:06:07.733 }, 00:06:07.733 "bdev_nvme": { 00:06:07.733 "mask": "0x4000", 00:06:07.733 "tpoint_mask": "0x0" 00:06:07.733 }, 00:06:07.733 "sock": { 00:06:07.733 "mask": "0x8000", 00:06:07.733 "tpoint_mask": "0x0" 00:06:07.733 }, 00:06:07.733 "blob": { 00:06:07.733 "mask": "0x10000", 00:06:07.733 "tpoint_mask": "0x0" 00:06:07.733 }, 00:06:07.733 "bdev_raid": { 00:06:07.733 "mask": "0x20000", 00:06:07.733 "tpoint_mask": "0x0" 00:06:07.733 } 00:06:07.733 }' 00:06:07.733 12:26:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:07.733 12:26:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:07.733 12:26:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:07.992 12:26:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:07.992 12:26:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:07.992 12:26:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:07.992 12:26:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:07.992 12:26:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:07.992 12:26:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:07.992 ************************************ 00:06:07.992 END TEST rpc_trace_cmd_test 00:06:07.992 ************************************ 00:06:07.992 12:26:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
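The checks above confirm that the bdev group's tpoint mask is non-zero (0xffffffffffffffff), consistent with the -e bdev flag the target was started with. A short sketch of inspecting and capturing those tracepoints by hand; trace_get_info is the RPC the test uses, the spdk_trace invocation follows the hint printed by app_setup_trace earlier in this log, and the build/bin path is an assumption based on the layout above:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py trace_get_info                 # dump the trace shm path, group mask, and per-group tpoint masks
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 69940     # snapshot trace events from the running target (pid from this run)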
00:06:07.992 00:06:07.992 real 0m0.291s 00:06:07.992 user 0m0.238s 00:06:07.992 sys 0m0.035s 00:06:07.992 12:26:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.992 12:26:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:07.992 12:26:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:07.992 12:26:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:07.992 12:26:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:07.992 12:26:13 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.992 12:26:13 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.992 12:26:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.992 ************************************ 00:06:07.992 START TEST rpc_daemon_integrity 00:06:07.992 ************************************ 00:06:07.992 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:07.992 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:07.992 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.992 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.992 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.992 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:07.992 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:08.252 { 00:06:08.252 "name": "Malloc2", 00:06:08.252 "aliases": [ 00:06:08.252 "7817e7d6-d33f-4613-b630-c20ced4ab844" 00:06:08.252 ], 00:06:08.252 "product_name": "Malloc disk", 00:06:08.252 "block_size": 512, 00:06:08.252 "num_blocks": 16384, 00:06:08.252 "uuid": "7817e7d6-d33f-4613-b630-c20ced4ab844", 00:06:08.252 "assigned_rate_limits": { 00:06:08.252 "rw_ios_per_sec": 0, 00:06:08.252 "rw_mbytes_per_sec": 0, 00:06:08.252 "r_mbytes_per_sec": 0, 00:06:08.252 "w_mbytes_per_sec": 0 00:06:08.252 }, 00:06:08.252 "claimed": false, 00:06:08.252 "zoned": false, 00:06:08.252 "supported_io_types": { 00:06:08.252 "read": true, 00:06:08.252 "write": true, 00:06:08.252 "unmap": true, 00:06:08.252 "flush": true, 00:06:08.252 "reset": true, 00:06:08.252 "nvme_admin": false, 00:06:08.252 "nvme_io": false, 00:06:08.252 "nvme_io_md": false, 00:06:08.252 "write_zeroes": true, 00:06:08.252 "zcopy": true, 00:06:08.252 "get_zone_info": false, 00:06:08.252 "zone_management": false, 00:06:08.252 "zone_append": false, 
00:06:08.252 "compare": false, 00:06:08.252 "compare_and_write": false, 00:06:08.252 "abort": true, 00:06:08.252 "seek_hole": false, 00:06:08.252 "seek_data": false, 00:06:08.252 "copy": true, 00:06:08.252 "nvme_iov_md": false 00:06:08.252 }, 00:06:08.252 "memory_domains": [ 00:06:08.252 { 00:06:08.252 "dma_device_id": "system", 00:06:08.252 "dma_device_type": 1 00:06:08.252 }, 00:06:08.252 { 00:06:08.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.252 "dma_device_type": 2 00:06:08.252 } 00:06:08.252 ], 00:06:08.252 "driver_specific": {} 00:06:08.252 } 00:06:08.252 ]' 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.252 [2024-11-19 12:26:13.379541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:08.252 [2024-11-19 12:26:13.379744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:08.252 [2024-11-19 12:26:13.379771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20945a0 00:06:08.252 [2024-11-19 12:26:13.379781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:08.252 [2024-11-19 12:26:13.381158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:08.252 [2024-11-19 12:26:13.381192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:08.252 Passthru0 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.252 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:08.252 { 00:06:08.252 "name": "Malloc2", 00:06:08.252 "aliases": [ 00:06:08.252 "7817e7d6-d33f-4613-b630-c20ced4ab844" 00:06:08.253 ], 00:06:08.253 "product_name": "Malloc disk", 00:06:08.253 "block_size": 512, 00:06:08.253 "num_blocks": 16384, 00:06:08.253 "uuid": "7817e7d6-d33f-4613-b630-c20ced4ab844", 00:06:08.253 "assigned_rate_limits": { 00:06:08.253 "rw_ios_per_sec": 0, 00:06:08.253 "rw_mbytes_per_sec": 0, 00:06:08.253 "r_mbytes_per_sec": 0, 00:06:08.253 "w_mbytes_per_sec": 0 00:06:08.253 }, 00:06:08.253 "claimed": true, 00:06:08.253 "claim_type": "exclusive_write", 00:06:08.253 "zoned": false, 00:06:08.253 "supported_io_types": { 00:06:08.253 "read": true, 00:06:08.253 "write": true, 00:06:08.253 "unmap": true, 00:06:08.253 "flush": true, 00:06:08.253 "reset": true, 00:06:08.253 "nvme_admin": false, 00:06:08.253 "nvme_io": false, 00:06:08.253 "nvme_io_md": false, 00:06:08.253 "write_zeroes": true, 00:06:08.253 "zcopy": true, 00:06:08.253 "get_zone_info": false, 00:06:08.253 "zone_management": false, 00:06:08.253 "zone_append": false, 00:06:08.253 "compare": false, 00:06:08.253 "compare_and_write": false, 00:06:08.253 "abort": true, 00:06:08.253 "seek_hole": 
false, 00:06:08.253 "seek_data": false, 00:06:08.253 "copy": true, 00:06:08.253 "nvme_iov_md": false 00:06:08.253 }, 00:06:08.253 "memory_domains": [ 00:06:08.253 { 00:06:08.253 "dma_device_id": "system", 00:06:08.253 "dma_device_type": 1 00:06:08.253 }, 00:06:08.253 { 00:06:08.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.253 "dma_device_type": 2 00:06:08.253 } 00:06:08.253 ], 00:06:08.253 "driver_specific": {} 00:06:08.253 }, 00:06:08.253 { 00:06:08.253 "name": "Passthru0", 00:06:08.253 "aliases": [ 00:06:08.253 "c73245a1-08b2-55e5-9ad3-f253366e16b6" 00:06:08.253 ], 00:06:08.253 "product_name": "passthru", 00:06:08.253 "block_size": 512, 00:06:08.253 "num_blocks": 16384, 00:06:08.253 "uuid": "c73245a1-08b2-55e5-9ad3-f253366e16b6", 00:06:08.253 "assigned_rate_limits": { 00:06:08.253 "rw_ios_per_sec": 0, 00:06:08.253 "rw_mbytes_per_sec": 0, 00:06:08.253 "r_mbytes_per_sec": 0, 00:06:08.253 "w_mbytes_per_sec": 0 00:06:08.253 }, 00:06:08.253 "claimed": false, 00:06:08.253 "zoned": false, 00:06:08.253 "supported_io_types": { 00:06:08.253 "read": true, 00:06:08.253 "write": true, 00:06:08.253 "unmap": true, 00:06:08.253 "flush": true, 00:06:08.253 "reset": true, 00:06:08.253 "nvme_admin": false, 00:06:08.253 "nvme_io": false, 00:06:08.253 "nvme_io_md": false, 00:06:08.253 "write_zeroes": true, 00:06:08.253 "zcopy": true, 00:06:08.253 "get_zone_info": false, 00:06:08.253 "zone_management": false, 00:06:08.253 "zone_append": false, 00:06:08.253 "compare": false, 00:06:08.253 "compare_and_write": false, 00:06:08.253 "abort": true, 00:06:08.253 "seek_hole": false, 00:06:08.253 "seek_data": false, 00:06:08.253 "copy": true, 00:06:08.253 "nvme_iov_md": false 00:06:08.253 }, 00:06:08.253 "memory_domains": [ 00:06:08.253 { 00:06:08.253 "dma_device_id": "system", 00:06:08.253 "dma_device_type": 1 00:06:08.253 }, 00:06:08.253 { 00:06:08.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.253 "dma_device_type": 2 00:06:08.253 } 00:06:08.253 ], 00:06:08.253 "driver_specific": { 00:06:08.253 "passthru": { 00:06:08.253 "name": "Passthru0", 00:06:08.253 "base_bdev_name": "Malloc2" 00:06:08.253 } 00:06:08.253 } 00:06:08.253 } 00:06:08.253 ]' 00:06:08.253 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:08.253 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:08.253 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:08.253 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.253 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.253 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.253 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:08.253 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.253 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.253 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.253 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:08.253 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.253 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.253 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.253 12:26:13 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:06:08.253 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:08.512 ************************************ 00:06:08.512 END TEST rpc_daemon_integrity 00:06:08.512 ************************************ 00:06:08.512 12:26:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:08.512 00:06:08.512 real 0m0.330s 00:06:08.512 user 0m0.231s 00:06:08.512 sys 0m0.032s 00:06:08.512 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.512 12:26:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.512 12:26:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:08.512 12:26:13 rpc -- rpc/rpc.sh@84 -- # killprocess 69940 00:06:08.512 12:26:13 rpc -- common/autotest_common.sh@950 -- # '[' -z 69940 ']' 00:06:08.512 12:26:13 rpc -- common/autotest_common.sh@954 -- # kill -0 69940 00:06:08.512 12:26:13 rpc -- common/autotest_common.sh@955 -- # uname 00:06:08.512 12:26:13 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.512 12:26:13 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69940 00:06:08.512 killing process with pid 69940 00:06:08.512 12:26:13 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.512 12:26:13 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.512 12:26:13 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69940' 00:06:08.512 12:26:13 rpc -- common/autotest_common.sh@969 -- # kill 69940 00:06:08.512 12:26:13 rpc -- common/autotest_common.sh@974 -- # wait 69940 00:06:08.771 00:06:08.771 real 0m2.217s 00:06:08.771 user 0m2.975s 00:06:08.771 sys 0m0.590s 00:06:08.771 12:26:13 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.771 ************************************ 00:06:08.771 END TEST rpc 00:06:08.771 ************************************ 00:06:08.771 12:26:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.771 12:26:13 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:08.771 12:26:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.771 12:26:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.771 12:26:13 -- common/autotest_common.sh@10 -- # set +x 00:06:08.771 ************************************ 00:06:08.771 START TEST skip_rpc 00:06:08.771 ************************************ 00:06:08.771 12:26:13 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:08.771 * Looking for test storage... 
00:06:08.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:08.771 12:26:13 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:08.771 12:26:13 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:08.771 12:26:13 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:09.030 12:26:14 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.030 12:26:14 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:09.030 12:26:14 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.030 12:26:14 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:09.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.030 --rc genhtml_branch_coverage=1 00:06:09.030 --rc genhtml_function_coverage=1 00:06:09.030 --rc genhtml_legend=1 00:06:09.030 --rc geninfo_all_blocks=1 00:06:09.030 --rc geninfo_unexecuted_blocks=1 00:06:09.030 00:06:09.031 ' 00:06:09.031 12:26:14 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:09.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.031 --rc genhtml_branch_coverage=1 00:06:09.031 --rc genhtml_function_coverage=1 00:06:09.031 --rc genhtml_legend=1 00:06:09.031 --rc geninfo_all_blocks=1 00:06:09.031 --rc geninfo_unexecuted_blocks=1 00:06:09.031 00:06:09.031 ' 00:06:09.031 12:26:14 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:06:09.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.031 --rc genhtml_branch_coverage=1 00:06:09.031 --rc genhtml_function_coverage=1 00:06:09.031 --rc genhtml_legend=1 00:06:09.031 --rc geninfo_all_blocks=1 00:06:09.031 --rc geninfo_unexecuted_blocks=1 00:06:09.031 00:06:09.031 ' 00:06:09.031 12:26:14 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:09.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.031 --rc genhtml_branch_coverage=1 00:06:09.031 --rc genhtml_function_coverage=1 00:06:09.031 --rc genhtml_legend=1 00:06:09.031 --rc geninfo_all_blocks=1 00:06:09.031 --rc geninfo_unexecuted_blocks=1 00:06:09.031 00:06:09.031 ' 00:06:09.031 12:26:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:09.031 12:26:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:09.031 12:26:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:09.031 12:26:14 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.031 12:26:14 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.031 12:26:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.031 ************************************ 00:06:09.031 START TEST skip_rpc 00:06:09.031 ************************************ 00:06:09.031 12:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:09.031 12:26:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70133 00:06:09.031 12:26:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:09.031 12:26:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.031 12:26:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:09.031 [2024-11-19 12:26:14.167067] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:09.031 [2024-11-19 12:26:14.167976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70133 ] 00:06:09.291 [2024-11-19 12:26:14.313127] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.291 [2024-11-19 12:26:14.354288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.291 [2024-11-19 12:26:14.394238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70133 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 70133 ']' 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 70133 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70133 00:06:14.561 killing process with pid 70133 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70133' 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 70133 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 70133 00:06:14.561 ************************************ 00:06:14.561 END TEST skip_rpc 00:06:14.561 ************************************ 00:06:14.561 00:06:14.561 real 0m5.274s 00:06:14.561 user 0m4.982s 00:06:14.561 sys 0m0.208s 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.561 12:26:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.561 12:26:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:14.561 12:26:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.561 12:26:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.561 12:26:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.561 ************************************ 00:06:14.561 START TEST skip_rpc_with_json 00:06:14.561 ************************************ 00:06:14.561 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:14.561 12:26:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:14.561 12:26:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70219 00:06:14.561 12:26:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.561 12:26:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.561 12:26:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70219 00:06:14.561 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 70219 ']' 00:06:14.561 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.561 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.561 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.561 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.561 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.561 [2024-11-19 12:26:19.494873] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:14.561 [2024-11-19 12:26:19.495013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70219 ] 00:06:14.561 [2024-11-19 12:26:19.634434] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.561 [2024-11-19 12:26:19.668120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.561 [2024-11-19 12:26:19.701962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.561 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.561 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:14.561 12:26:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:14.561 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.562 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.562 [2024-11-19 12:26:19.817172] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:14.821 request: 00:06:14.821 { 00:06:14.821 "trtype": "tcp", 00:06:14.821 "method": "nvmf_get_transports", 00:06:14.821 "req_id": 1 00:06:14.821 } 00:06:14.821 Got JSON-RPC error response 00:06:14.821 response: 00:06:14.821 { 00:06:14.821 "code": -19, 00:06:14.821 "message": "No such device" 00:06:14.821 } 00:06:14.821 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:14.821 12:26:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:14.821 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.821 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.821 [2024-11-19 12:26:19.829246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.821 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.821 12:26:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:14.821 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.821 12:26:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.821 12:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.821 12:26:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:14.821 { 00:06:14.821 "subsystems": [ 00:06:14.821 { 00:06:14.821 "subsystem": "fsdev", 00:06:14.821 "config": [ 00:06:14.821 { 00:06:14.821 "method": "fsdev_set_opts", 00:06:14.821 "params": { 00:06:14.821 "fsdev_io_pool_size": 65535, 00:06:14.821 "fsdev_io_cache_size": 256 00:06:14.821 } 00:06:14.821 } 00:06:14.821 ] 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "subsystem": "vfio_user_target", 00:06:14.821 "config": null 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "subsystem": "keyring", 00:06:14.821 "config": [] 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "subsystem": "iobuf", 00:06:14.821 "config": [ 00:06:14.821 { 00:06:14.821 "method": "iobuf_set_options", 00:06:14.821 "params": { 00:06:14.821 "small_pool_count": 8192, 00:06:14.821 "large_pool_count": 1024, 00:06:14.821 
"small_bufsize": 8192, 00:06:14.821 "large_bufsize": 135168 00:06:14.821 } 00:06:14.821 } 00:06:14.821 ] 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "subsystem": "sock", 00:06:14.821 "config": [ 00:06:14.821 { 00:06:14.821 "method": "sock_set_default_impl", 00:06:14.821 "params": { 00:06:14.821 "impl_name": "uring" 00:06:14.821 } 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "method": "sock_impl_set_options", 00:06:14.821 "params": { 00:06:14.821 "impl_name": "ssl", 00:06:14.821 "recv_buf_size": 4096, 00:06:14.821 "send_buf_size": 4096, 00:06:14.821 "enable_recv_pipe": true, 00:06:14.821 "enable_quickack": false, 00:06:14.821 "enable_placement_id": 0, 00:06:14.821 "enable_zerocopy_send_server": true, 00:06:14.821 "enable_zerocopy_send_client": false, 00:06:14.821 "zerocopy_threshold": 0, 00:06:14.821 "tls_version": 0, 00:06:14.821 "enable_ktls": false 00:06:14.821 } 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "method": "sock_impl_set_options", 00:06:14.821 "params": { 00:06:14.821 "impl_name": "posix", 00:06:14.821 "recv_buf_size": 2097152, 00:06:14.821 "send_buf_size": 2097152, 00:06:14.821 "enable_recv_pipe": true, 00:06:14.821 "enable_quickack": false, 00:06:14.821 "enable_placement_id": 0, 00:06:14.821 "enable_zerocopy_send_server": true, 00:06:14.821 "enable_zerocopy_send_client": false, 00:06:14.821 "zerocopy_threshold": 0, 00:06:14.821 "tls_version": 0, 00:06:14.821 "enable_ktls": false 00:06:14.821 } 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "method": "sock_impl_set_options", 00:06:14.821 "params": { 00:06:14.821 "impl_name": "uring", 00:06:14.821 "recv_buf_size": 2097152, 00:06:14.821 "send_buf_size": 2097152, 00:06:14.821 "enable_recv_pipe": true, 00:06:14.821 "enable_quickack": false, 00:06:14.821 "enable_placement_id": 0, 00:06:14.821 "enable_zerocopy_send_server": false, 00:06:14.821 "enable_zerocopy_send_client": false, 00:06:14.821 "zerocopy_threshold": 0, 00:06:14.821 "tls_version": 0, 00:06:14.821 "enable_ktls": false 00:06:14.821 } 00:06:14.821 } 00:06:14.821 ] 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "subsystem": "vmd", 00:06:14.821 "config": [] 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "subsystem": "accel", 00:06:14.821 "config": [ 00:06:14.821 { 00:06:14.821 "method": "accel_set_options", 00:06:14.821 "params": { 00:06:14.821 "small_cache_size": 128, 00:06:14.821 "large_cache_size": 16, 00:06:14.821 "task_count": 2048, 00:06:14.821 "sequence_count": 2048, 00:06:14.821 "buf_count": 2048 00:06:14.821 } 00:06:14.821 } 00:06:14.821 ] 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "subsystem": "bdev", 00:06:14.821 "config": [ 00:06:14.821 { 00:06:14.821 "method": "bdev_set_options", 00:06:14.821 "params": { 00:06:14.821 "bdev_io_pool_size": 65535, 00:06:14.821 "bdev_io_cache_size": 256, 00:06:14.821 "bdev_auto_examine": true, 00:06:14.821 "iobuf_small_cache_size": 128, 00:06:14.821 "iobuf_large_cache_size": 16 00:06:14.821 } 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "method": "bdev_raid_set_options", 00:06:14.821 "params": { 00:06:14.821 "process_window_size_kb": 1024, 00:06:14.821 "process_max_bandwidth_mb_sec": 0 00:06:14.821 } 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "method": "bdev_iscsi_set_options", 00:06:14.821 "params": { 00:06:14.821 "timeout_sec": 30 00:06:14.821 } 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "method": "bdev_nvme_set_options", 00:06:14.821 "params": { 00:06:14.821 "action_on_timeout": "none", 00:06:14.821 "timeout_us": 0, 00:06:14.821 "timeout_admin_us": 0, 00:06:14.821 "keep_alive_timeout_ms": 10000, 00:06:14.821 "arbitration_burst": 0, 
00:06:14.821 "low_priority_weight": 0, 00:06:14.821 "medium_priority_weight": 0, 00:06:14.821 "high_priority_weight": 0, 00:06:14.821 "nvme_adminq_poll_period_us": 10000, 00:06:14.821 "nvme_ioq_poll_period_us": 0, 00:06:14.821 "io_queue_requests": 0, 00:06:14.821 "delay_cmd_submit": true, 00:06:14.821 "transport_retry_count": 4, 00:06:14.821 "bdev_retry_count": 3, 00:06:14.821 "transport_ack_timeout": 0, 00:06:14.821 "ctrlr_loss_timeout_sec": 0, 00:06:14.821 "reconnect_delay_sec": 0, 00:06:14.821 "fast_io_fail_timeout_sec": 0, 00:06:14.821 "disable_auto_failback": false, 00:06:14.821 "generate_uuids": false, 00:06:14.821 "transport_tos": 0, 00:06:14.821 "nvme_error_stat": false, 00:06:14.821 "rdma_srq_size": 0, 00:06:14.821 "io_path_stat": false, 00:06:14.821 "allow_accel_sequence": false, 00:06:14.821 "rdma_max_cq_size": 0, 00:06:14.821 "rdma_cm_event_timeout_ms": 0, 00:06:14.821 "dhchap_digests": [ 00:06:14.821 "sha256", 00:06:14.821 "sha384", 00:06:14.821 "sha512" 00:06:14.821 ], 00:06:14.821 "dhchap_dhgroups": [ 00:06:14.821 "null", 00:06:14.821 "ffdhe2048", 00:06:14.821 "ffdhe3072", 00:06:14.821 "ffdhe4096", 00:06:14.821 "ffdhe6144", 00:06:14.821 "ffdhe8192" 00:06:14.821 ] 00:06:14.821 } 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "method": "bdev_nvme_set_hotplug", 00:06:14.821 "params": { 00:06:14.821 "period_us": 100000, 00:06:14.821 "enable": false 00:06:14.821 } 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "method": "bdev_wait_for_examine" 00:06:14.821 } 00:06:14.821 ] 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "subsystem": "scsi", 00:06:14.821 "config": null 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "subsystem": "scheduler", 00:06:14.821 "config": [ 00:06:14.821 { 00:06:14.821 "method": "framework_set_scheduler", 00:06:14.821 "params": { 00:06:14.821 "name": "static" 00:06:14.821 } 00:06:14.821 } 00:06:14.821 ] 00:06:14.821 }, 00:06:14.821 { 00:06:14.821 "subsystem": "vhost_scsi", 00:06:14.821 "config": [] 00:06:14.821 }, 00:06:14.822 { 00:06:14.822 "subsystem": "vhost_blk", 00:06:14.822 "config": [] 00:06:14.822 }, 00:06:14.822 { 00:06:14.822 "subsystem": "ublk", 00:06:14.822 "config": [] 00:06:14.822 }, 00:06:14.822 { 00:06:14.822 "subsystem": "nbd", 00:06:14.822 "config": [] 00:06:14.822 }, 00:06:14.822 { 00:06:14.822 "subsystem": "nvmf", 00:06:14.822 "config": [ 00:06:14.822 { 00:06:14.822 "method": "nvmf_set_config", 00:06:14.822 "params": { 00:06:14.822 "discovery_filter": "match_any", 00:06:14.822 "admin_cmd_passthru": { 00:06:14.822 "identify_ctrlr": false 00:06:14.822 }, 00:06:14.822 "dhchap_digests": [ 00:06:14.822 "sha256", 00:06:14.822 "sha384", 00:06:14.822 "sha512" 00:06:14.822 ], 00:06:14.822 "dhchap_dhgroups": [ 00:06:14.822 "null", 00:06:14.822 "ffdhe2048", 00:06:14.822 "ffdhe3072", 00:06:14.822 "ffdhe4096", 00:06:14.822 "ffdhe6144", 00:06:14.822 "ffdhe8192" 00:06:14.822 ] 00:06:14.822 } 00:06:14.822 }, 00:06:14.822 { 00:06:14.822 "method": "nvmf_set_max_subsystems", 00:06:14.822 "params": { 00:06:14.822 "max_subsystems": 1024 00:06:14.822 } 00:06:14.822 }, 00:06:14.822 { 00:06:14.822 "method": "nvmf_set_crdt", 00:06:14.822 "params": { 00:06:14.822 "crdt1": 0, 00:06:14.822 "crdt2": 0, 00:06:14.822 "crdt3": 0 00:06:14.822 } 00:06:14.822 }, 00:06:14.822 { 00:06:14.822 "method": "nvmf_create_transport", 00:06:14.822 "params": { 00:06:14.822 "trtype": "TCP", 00:06:14.822 "max_queue_depth": 128, 00:06:14.822 "max_io_qpairs_per_ctrlr": 127, 00:06:14.822 "in_capsule_data_size": 4096, 00:06:14.822 "max_io_size": 131072, 00:06:14.822 "io_unit_size": 131072, 00:06:14.822 
"max_aq_depth": 128, 00:06:14.822 "num_shared_buffers": 511, 00:06:14.822 "buf_cache_size": 4294967295, 00:06:14.822 "dif_insert_or_strip": false, 00:06:14.822 "zcopy": false, 00:06:14.822 "c2h_success": true, 00:06:14.822 "sock_priority": 0, 00:06:14.822 "abort_timeout_sec": 1, 00:06:14.822 "ack_timeout": 0, 00:06:14.822 "data_wr_pool_size": 0 00:06:14.822 } 00:06:14.822 } 00:06:14.822 ] 00:06:14.822 }, 00:06:14.822 { 00:06:14.822 "subsystem": "iscsi", 00:06:14.822 "config": [ 00:06:14.822 { 00:06:14.822 "method": "iscsi_set_options", 00:06:14.822 "params": { 00:06:14.822 "node_base": "iqn.2016-06.io.spdk", 00:06:14.822 "max_sessions": 128, 00:06:14.822 "max_connections_per_session": 2, 00:06:14.822 "max_queue_depth": 64, 00:06:14.822 "default_time2wait": 2, 00:06:14.822 "default_time2retain": 20, 00:06:14.822 "first_burst_length": 8192, 00:06:14.822 "immediate_data": true, 00:06:14.822 "allow_duplicated_isid": false, 00:06:14.822 "error_recovery_level": 0, 00:06:14.822 "nop_timeout": 60, 00:06:14.822 "nop_in_interval": 30, 00:06:14.822 "disable_chap": false, 00:06:14.822 "require_chap": false, 00:06:14.822 "mutual_chap": false, 00:06:14.822 "chap_group": 0, 00:06:14.822 "max_large_datain_per_connection": 64, 00:06:14.822 "max_r2t_per_connection": 4, 00:06:14.822 "pdu_pool_size": 36864, 00:06:14.822 "immediate_data_pool_size": 16384, 00:06:14.822 "data_out_pool_size": 2048 00:06:14.822 } 00:06:14.822 } 00:06:14.822 ] 00:06:14.822 } 00:06:14.822 ] 00:06:14.822 } 00:06:14.822 12:26:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:14.822 12:26:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70219 00:06:14.822 12:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70219 ']' 00:06:14.822 12:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70219 00:06:14.822 12:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:14.822 12:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.822 12:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70219 00:06:14.822 12:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.822 killing process with pid 70219 00:06:14.822 12:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.822 12:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70219' 00:06:14.822 12:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70219 00:06:14.822 12:26:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70219 00:06:15.080 12:26:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70234 00:06:15.080 12:26:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:15.080 12:26:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:20.350 12:26:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70234 00:06:20.350 12:26:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 70234 ']' 00:06:20.350 12:26:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 70234 00:06:20.350 12:26:25 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # uname 00:06:20.350 12:26:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.350 12:26:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70234 00:06:20.350 12:26:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.350 killing process with pid 70234 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70234' 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 70234 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 70234 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:20.351 00:06:20.351 real 0m6.122s 00:06:20.351 user 0m5.873s 00:06:20.351 sys 0m0.429s 00:06:20.351 ************************************ 00:06:20.351 END TEST skip_rpc_with_json 00:06:20.351 ************************************ 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:20.351 12:26:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:20.351 12:26:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.351 12:26:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.351 12:26:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.351 ************************************ 00:06:20.351 START TEST skip_rpc_with_delay 00:06:20.351 ************************************ 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:20.351 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:20.610 [2024-11-19 12:26:25.666914] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:20.610 [2024-11-19 12:26:25.667049] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:20.610 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:20.610 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.610 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:20.610 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.610 00:06:20.610 real 0m0.092s 00:06:20.610 user 0m0.059s 00:06:20.610 sys 0m0.032s 00:06:20.610 ************************************ 00:06:20.610 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.610 12:26:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:20.610 END TEST skip_rpc_with_delay 00:06:20.610 ************************************ 00:06:20.610 12:26:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:20.610 12:26:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:20.610 12:26:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:20.610 12:26:25 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.610 12:26:25 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.610 12:26:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.610 ************************************ 00:06:20.610 START TEST exit_on_failed_rpc_init 00:06:20.610 ************************************ 00:06:20.610 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:20.610 12:26:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70343 00:06:20.610 12:26:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70343 00:06:20.610 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 70343 ']' 00:06:20.610 12:26:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.610 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.610 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.610 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:20.610 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.610 12:26:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:20.611 [2024-11-19 12:26:25.813199] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:20.611 [2024-11-19 12:26:25.813301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70343 ] 00:06:20.870 [2024-11-19 12:26:25.950727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.870 [2024-11-19 12:26:25.982468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.870 [2024-11-19 12:26:26.016451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.129 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.129 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:21.129 12:26:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.129 12:26:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:21.129 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:21.129 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:21.129 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.129 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.129 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.129 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.129 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.129 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.129 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.129 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:21.129 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:21.129 [2024-11-19 12:26:26.210050] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:21.129 [2024-11-19 12:26:26.210173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70354 ] 00:06:21.129 [2024-11-19 12:26:26.350953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.388 [2024-11-19 12:26:26.392127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.388 [2024-11-19 12:26:26.392244] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:21.388 [2024-11-19 12:26:26.392261] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:21.388 [2024-11-19 12:26:26.392271] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70343 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 70343 ']' 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 70343 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70343 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.388 killing process with pid 70343 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70343' 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 70343 00:06:21.388 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 70343 00:06:21.647 00:06:21.647 real 0m1.004s 00:06:21.647 user 0m1.192s 00:06:21.647 sys 0m0.268s 00:06:21.647 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.647 12:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:21.647 ************************************ 00:06:21.647 END TEST exit_on_failed_rpc_init 00:06:21.647 ************************************ 00:06:21.647 12:26:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:21.647 00:06:21.647 real 0m12.885s 00:06:21.647 user 0m12.271s 
00:06:21.647 sys 0m1.150s 00:06:21.647 12:26:26 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.647 12:26:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.647 ************************************ 00:06:21.647 END TEST skip_rpc 00:06:21.647 ************************************ 00:06:21.647 12:26:26 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:21.647 12:26:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.647 12:26:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.647 12:26:26 -- common/autotest_common.sh@10 -- # set +x 00:06:21.647 ************************************ 00:06:21.647 START TEST rpc_client 00:06:21.647 ************************************ 00:06:21.647 12:26:26 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:21.907 * Looking for test storage... 00:06:21.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:21.907 12:26:26 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:21.907 12:26:26 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:21.907 12:26:26 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:21.907 12:26:27 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.907 12:26:27 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:21.908 12:26:27 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:21.908 12:26:27 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.908 12:26:27 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:21.908 12:26:27 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.908 12:26:27 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.908 12:26:27 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.908 12:26:27 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:21.908 12:26:27 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.908 12:26:27 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:21.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.908 --rc genhtml_branch_coverage=1 00:06:21.908 --rc genhtml_function_coverage=1 00:06:21.908 --rc genhtml_legend=1 00:06:21.908 --rc geninfo_all_blocks=1 00:06:21.908 --rc geninfo_unexecuted_blocks=1 00:06:21.908 00:06:21.908 ' 00:06:21.908 12:26:27 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:21.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.908 --rc genhtml_branch_coverage=1 00:06:21.908 --rc genhtml_function_coverage=1 00:06:21.908 --rc genhtml_legend=1 00:06:21.908 --rc geninfo_all_blocks=1 00:06:21.908 --rc geninfo_unexecuted_blocks=1 00:06:21.908 00:06:21.908 ' 00:06:21.908 12:26:27 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:21.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.908 --rc genhtml_branch_coverage=1 00:06:21.908 --rc genhtml_function_coverage=1 00:06:21.908 --rc genhtml_legend=1 00:06:21.908 --rc geninfo_all_blocks=1 00:06:21.908 --rc geninfo_unexecuted_blocks=1 00:06:21.908 00:06:21.908 ' 00:06:21.908 12:26:27 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:21.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.908 --rc genhtml_branch_coverage=1 00:06:21.908 --rc genhtml_function_coverage=1 00:06:21.908 --rc genhtml_legend=1 00:06:21.908 --rc geninfo_all_blocks=1 00:06:21.908 --rc geninfo_unexecuted_blocks=1 00:06:21.908 00:06:21.908 ' 00:06:21.908 12:26:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:21.908 OK 00:06:21.908 12:26:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:21.908 00:06:21.908 real 0m0.203s 00:06:21.908 user 0m0.126s 00:06:21.908 sys 0m0.088s 00:06:21.908 12:26:27 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.908 ************************************ 00:06:21.908 END TEST rpc_client 00:06:21.908 ************************************ 00:06:21.908 12:26:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:21.908 12:26:27 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:21.908 12:26:27 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.908 12:26:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.908 12:26:27 -- common/autotest_common.sh@10 -- # set +x 00:06:21.908 ************************************ 00:06:21.908 START TEST json_config 00:06:21.908 ************************************ 00:06:21.908 12:26:27 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:21.908 12:26:27 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:21.908 12:26:27 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:21.908 12:26:27 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:22.168 12:26:27 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:22.169 12:26:27 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.169 12:26:27 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.169 12:26:27 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.169 12:26:27 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.169 12:26:27 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.169 12:26:27 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.169 12:26:27 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.169 12:26:27 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.169 12:26:27 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.169 12:26:27 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.169 12:26:27 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.169 12:26:27 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:22.169 12:26:27 json_config -- scripts/common.sh@345 -- # : 1 00:06:22.169 12:26:27 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.169 12:26:27 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.169 12:26:27 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:22.169 12:26:27 json_config -- scripts/common.sh@353 -- # local d=1 00:06:22.169 12:26:27 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.169 12:26:27 json_config -- scripts/common.sh@355 -- # echo 1 00:06:22.169 12:26:27 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.169 12:26:27 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:22.169 12:26:27 json_config -- scripts/common.sh@353 -- # local d=2 00:06:22.169 12:26:27 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.169 12:26:27 json_config -- scripts/common.sh@355 -- # echo 2 00:06:22.169 12:26:27 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.169 12:26:27 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.169 12:26:27 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.169 12:26:27 json_config -- scripts/common.sh@368 -- # return 0 00:06:22.169 12:26:27 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.169 12:26:27 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:22.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.169 --rc genhtml_branch_coverage=1 00:06:22.169 --rc genhtml_function_coverage=1 00:06:22.169 --rc genhtml_legend=1 00:06:22.169 --rc geninfo_all_blocks=1 00:06:22.169 --rc geninfo_unexecuted_blocks=1 00:06:22.169 00:06:22.169 ' 00:06:22.169 12:26:27 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:22.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.169 --rc genhtml_branch_coverage=1 00:06:22.169 --rc genhtml_function_coverage=1 00:06:22.169 --rc genhtml_legend=1 00:06:22.169 --rc geninfo_all_blocks=1 00:06:22.169 --rc geninfo_unexecuted_blocks=1 00:06:22.169 00:06:22.169 ' 00:06:22.169 12:26:27 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:22.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.169 --rc genhtml_branch_coverage=1 00:06:22.169 --rc genhtml_function_coverage=1 00:06:22.169 --rc genhtml_legend=1 00:06:22.169 --rc geninfo_all_blocks=1 00:06:22.169 --rc geninfo_unexecuted_blocks=1 00:06:22.169 00:06:22.169 ' 00:06:22.169 12:26:27 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:22.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.169 --rc genhtml_branch_coverage=1 00:06:22.169 --rc genhtml_function_coverage=1 00:06:22.169 --rc genhtml_legend=1 00:06:22.169 --rc geninfo_all_blocks=1 00:06:22.169 --rc geninfo_unexecuted_blocks=1 00:06:22.169 00:06:22.169 ' 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:22.169 12:26:27 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:22.169 12:26:27 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:22.169 12:26:27 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:22.169 12:26:27 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.169 12:26:27 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.169 12:26:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.169 12:26:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.169 12:26:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.169 12:26:27 json_config -- paths/export.sh@5 -- # export PATH 00:06:22.169 12:26:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@51 -- # : 0 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:22.169 12:26:27 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:22.169 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:22.169 12:26:27 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:22.169 INFO: JSON configuration test init 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:22.169 12:26:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.169 12:26:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:22.169 12:26:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.169 12:26:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.169 12:26:27 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:22.169 12:26:27 json_config -- json_config/common.sh@9 -- # local app=target 00:06:22.169 12:26:27 json_config -- json_config/common.sh@10 -- # shift 
00:06:22.169 12:26:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:22.170 12:26:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:22.170 12:26:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:22.170 12:26:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.170 12:26:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.170 12:26:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=70488 00:06:22.170 Waiting for target to run... 00:06:22.170 12:26:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:22.170 12:26:27 json_config -- json_config/common.sh@25 -- # waitforlisten 70488 /var/tmp/spdk_tgt.sock 00:06:22.170 12:26:27 json_config -- common/autotest_common.sh@831 -- # '[' -z 70488 ']' 00:06:22.170 12:26:27 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:22.170 12:26:27 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:22.170 12:26:27 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.170 12:26:27 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:22.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:22.170 12:26:27 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.170 12:26:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.170 [2024-11-19 12:26:27.371091] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:22.170 [2024-11-19 12:26:27.371194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70488 ] 00:06:22.429 [2024-11-19 12:26:27.666246] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.429 [2024-11-19 12:26:27.685927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.374 12:26:28 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.374 12:26:28 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:23.374 00:06:23.374 12:26:28 json_config -- json_config/common.sh@26 -- # echo '' 00:06:23.374 12:26:28 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:23.374 12:26:28 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:23.374 12:26:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.374 12:26:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.374 12:26:28 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:23.374 12:26:28 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:23.374 12:26:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.374 12:26:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.374 12:26:28 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:23.374 12:26:28 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:23.374 12:26:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:23.657 [2024-11-19 12:26:28.741069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.924 12:26:28 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:23.924 12:26:28 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:23.924 12:26:28 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.924 12:26:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.924 12:26:28 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:23.924 12:26:28 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:23.924 12:26:28 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:23.924 12:26:28 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:23.924 12:26:28 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:23.924 12:26:28 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:23.924 12:26:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:23.924 12:26:28 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@54 -- # sort 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:24.197 12:26:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:24.197 12:26:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:24.197 12:26:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:24.197 12:26:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:24.197 12:26:29 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:24.197 12:26:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:24.457 MallocForNvmf0 00:06:24.457 12:26:29 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:24.457 12:26:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:24.717 MallocForNvmf1 00:06:24.717 12:26:29 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:24.717 12:26:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:24.975 [2024-11-19 12:26:30.036444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:24.975 12:26:30 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:24.975 12:26:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.234 12:26:30 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:25.234 12:26:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:25.494 12:26:30 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:25.494 12:26:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:25.753 12:26:30 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:25.753 12:26:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:25.753 [2024-11-19 12:26:30.976937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:25.753 12:26:30 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:25.753 12:26:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:25.753 12:26:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.012 12:26:31 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:26.012 12:26:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:26.012 12:26:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.012 12:26:31 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:26.012 12:26:31 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:26.012 12:26:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:26.272 MallocBdevForConfigChangeCheck 00:06:26.272 12:26:31 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:26.272 12:26:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:26.272 12:26:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.272 12:26:31 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:26.272 12:26:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:26.531 INFO: shutting down applications... 00:06:26.531 12:26:31 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
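As an aside, the RPC sequence traced above can be replayed by hand against the same socket; a minimal sketch built only from commands visible in the trace (redirecting save_config into spdk_tgt_config.json is an assumption about how the script stores the snapshot that the later relaunch step reads back):
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk_tgt.sock
# two malloc bdevs to back the namespaces
$RPC -s $SOCK bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC -s $SOCK bdev_malloc_create 4 1024 --name MallocForNvmf1
# TCP transport, one subsystem, two namespaces, and a listener on 127.0.0.1:4420
$RPC -s $SOCK nvmf_create_transport -t tcp -u 8192 -c 0
$RPC -s $SOCK nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC -s $SOCK nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC -s $SOCK nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC -s $SOCK nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
# snapshot the resulting configuration as JSON
$RPC -s $SOCK save_config > spdk_tgt_config.json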
00:06:26.531 12:26:31 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:26.531 12:26:31 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:26.531 12:26:31 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:26.531 12:26:31 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:26.791 Calling clear_iscsi_subsystem 00:06:26.791 Calling clear_nvmf_subsystem 00:06:26.791 Calling clear_nbd_subsystem 00:06:26.791 Calling clear_ublk_subsystem 00:06:26.791 Calling clear_vhost_blk_subsystem 00:06:26.791 Calling clear_vhost_scsi_subsystem 00:06:26.791 Calling clear_bdev_subsystem 00:06:26.791 12:26:32 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:26.791 12:26:32 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:26.791 12:26:32 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:26.791 12:26:32 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:26.791 12:26:32 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:26.791 12:26:32 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:27.360 12:26:32 json_config -- json_config/json_config.sh@352 -- # break 00:06:27.360 12:26:32 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:27.360 12:26:32 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:27.360 12:26:32 json_config -- json_config/common.sh@31 -- # local app=target 00:06:27.360 12:26:32 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:27.360 12:26:32 json_config -- json_config/common.sh@35 -- # [[ -n 70488 ]] 00:06:27.360 12:26:32 json_config -- json_config/common.sh@38 -- # kill -SIGINT 70488 00:06:27.360 12:26:32 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:27.360 12:26:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.360 12:26:32 json_config -- json_config/common.sh@41 -- # kill -0 70488 00:06:27.360 12:26:32 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:27.928 12:26:32 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:27.928 12:26:32 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.928 12:26:32 json_config -- json_config/common.sh@41 -- # kill -0 70488 00:06:27.928 12:26:32 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:27.928 12:26:32 json_config -- json_config/common.sh@43 -- # break 00:06:27.928 12:26:32 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:27.928 SPDK target shutdown done 00:06:27.928 12:26:32 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:27.928 INFO: relaunching applications... 00:06:27.928 12:26:32 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
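The shutdown that follows uses the json_config/common.sh pattern visible in the trace: send SIGINT to the target, then poll the pid up to 30 times with a 0.5s sleep until it exits. A rough standalone equivalent, with the pid value taken from this run:
pid=70488                       # spdk_tgt pid from the trace above
kill -SIGINT "$pid"
for i in $(seq 1 30); do        # ~15s budget, 0.5s per attempt
    kill -0 "$pid" 2>/dev/null || break
    sleep 0.5
done
echo 'SPDK target shutdown done'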
00:06:27.928 12:26:32 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:27.928 12:26:32 json_config -- json_config/common.sh@9 -- # local app=target 00:06:27.928 12:26:32 json_config -- json_config/common.sh@10 -- # shift 00:06:27.928 12:26:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:27.928 12:26:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:27.928 12:26:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:27.928 12:26:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.928 12:26:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.928 12:26:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=70683 00:06:27.928 12:26:32 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:27.928 Waiting for target to run... 00:06:27.929 12:26:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:27.929 12:26:32 json_config -- json_config/common.sh@25 -- # waitforlisten 70683 /var/tmp/spdk_tgt.sock 00:06:27.929 12:26:32 json_config -- common/autotest_common.sh@831 -- # '[' -z 70683 ']' 00:06:27.929 12:26:32 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:27.929 12:26:32 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:27.929 12:26:32 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:27.929 12:26:32 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.929 12:26:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.929 [2024-11-19 12:26:33.034592] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:27.929 [2024-11-19 12:26:33.034675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70683 ] 00:06:28.188 [2024-11-19 12:26:33.312003] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.188 [2024-11-19 12:26:33.334262] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.448 [2024-11-19 12:26:33.462293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.448 [2024-11-19 12:26:33.650550] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.448 [2024-11-19 12:26:33.682628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:29.017 12:26:33 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.017 12:26:33 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:29.017 00:06:29.017 12:26:33 json_config -- json_config/common.sh@26 -- # echo '' 00:06:29.017 12:26:33 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:29.017 INFO: Checking if target configuration is the same... 
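Relaunching from the saved JSON mirrors the spdk_tgt command line shown above; a sketch, where the until-loop merely stands in for the waitforlisten helper (its internals are not shown in the trace) and uses rpc_get_methods purely as a liveness probe:
BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
CONF=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
$BIN -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$CONF" &
pid=$!
# wait until the RPC socket answers before issuing further commands
until $RPC -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done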
00:06:29.017 12:26:33 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:29.017 12:26:33 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:29.017 12:26:33 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:29.017 12:26:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:29.017 + '[' 2 -ne 2 ']' 00:06:29.017 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:29.017 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:29.017 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:29.017 +++ basename /dev/fd/62 00:06:29.017 ++ mktemp /tmp/62.XXX 00:06:29.017 + tmp_file_1=/tmp/62.dFg 00:06:29.017 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:29.017 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:29.017 + tmp_file_2=/tmp/spdk_tgt_config.json.aIx 00:06:29.017 + ret=0 00:06:29.017 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:29.276 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:29.276 + diff -u /tmp/62.dFg /tmp/spdk_tgt_config.json.aIx 00:06:29.276 + echo 'INFO: JSON config files are the same' 00:06:29.276 INFO: JSON config files are the same 00:06:29.276 + rm /tmp/62.dFg /tmp/spdk_tgt_config.json.aIx 00:06:29.276 + exit 0 00:06:29.276 12:26:34 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:29.276 INFO: changing configuration and checking if this can be detected... 00:06:29.276 12:26:34 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:29.276 12:26:34 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:29.276 12:26:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:29.536 12:26:34 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:29.536 12:26:34 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:29.536 12:26:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:29.536 + '[' 2 -ne 2 ']' 00:06:29.536 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:29.536 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
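The configuration comparison normalizes both sides with config_filter.py before diffing, so key ordering alone cannot produce a false mismatch. The exact plumbing inside json_diff.sh is not visible in the trace; an approximate equivalent:
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
# normalize the live config and the saved file, then diff the sorted forms
$RPC -s /var/tmp/spdk_tgt.sock save_config | $FILTER -method sort > /tmp/live.json
$FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'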
00:06:29.536 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:29.536 +++ basename /dev/fd/62 00:06:29.536 ++ mktemp /tmp/62.XXX 00:06:29.536 + tmp_file_1=/tmp/62.RzL 00:06:29.536 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:29.536 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:29.536 + tmp_file_2=/tmp/spdk_tgt_config.json.x5b 00:06:29.536 + ret=0 00:06:29.536 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:30.105 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:30.105 + diff -u /tmp/62.RzL /tmp/spdk_tgt_config.json.x5b 00:06:30.105 + ret=1 00:06:30.105 + echo '=== Start of file: /tmp/62.RzL ===' 00:06:30.105 + cat /tmp/62.RzL 00:06:30.105 + echo '=== End of file: /tmp/62.RzL ===' 00:06:30.105 + echo '' 00:06:30.105 + echo '=== Start of file: /tmp/spdk_tgt_config.json.x5b ===' 00:06:30.105 + cat /tmp/spdk_tgt_config.json.x5b 00:06:30.105 + echo '=== End of file: /tmp/spdk_tgt_config.json.x5b ===' 00:06:30.105 + echo '' 00:06:30.105 + rm /tmp/62.RzL /tmp/spdk_tgt_config.json.x5b 00:06:30.105 + exit 1 00:06:30.105 INFO: configuration change detected. 00:06:30.105 12:26:35 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:30.105 12:26:35 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:30.105 12:26:35 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:30.105 12:26:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:30.105 12:26:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.105 12:26:35 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:30.105 12:26:35 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:30.105 12:26:35 json_config -- json_config/json_config.sh@324 -- # [[ -n 70683 ]] 00:06:30.105 12:26:35 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:30.105 12:26:35 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:30.105 12:26:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:30.105 12:26:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.105 12:26:35 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:30.105 12:26:35 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:30.106 12:26:35 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:30.106 12:26:35 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:30.106 12:26:35 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:30.106 12:26:35 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:30.106 12:26:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:30.106 12:26:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.106 12:26:35 json_config -- json_config/json_config.sh@330 -- # killprocess 70683 00:06:30.106 12:26:35 json_config -- common/autotest_common.sh@950 -- # '[' -z 70683 ']' 00:06:30.106 12:26:35 json_config -- common/autotest_common.sh@954 -- # kill -0 70683 00:06:30.106 12:26:35 json_config -- common/autotest_common.sh@955 -- # uname 00:06:30.106 12:26:35 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.106 12:26:35 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70683 00:06:30.106 
12:26:35 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.106 killing process with pid 70683 00:06:30.106 12:26:35 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.106 12:26:35 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70683' 00:06:30.106 12:26:35 json_config -- common/autotest_common.sh@969 -- # kill 70683 00:06:30.106 12:26:35 json_config -- common/autotest_common.sh@974 -- # wait 70683 00:06:30.365 12:26:35 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:30.365 12:26:35 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:30.365 12:26:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:30.365 12:26:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.365 12:26:35 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:30.365 INFO: Success 00:06:30.365 12:26:35 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:30.365 00:06:30.365 real 0m8.377s 00:06:30.365 user 0m12.231s 00:06:30.365 sys 0m1.363s 00:06:30.365 12:26:35 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.365 12:26:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.365 ************************************ 00:06:30.365 END TEST json_config 00:06:30.365 ************************************ 00:06:30.365 12:26:35 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:30.365 12:26:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.365 12:26:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.365 12:26:35 -- common/autotest_common.sh@10 -- # set +x 00:06:30.365 ************************************ 00:06:30.365 START TEST json_config_extra_key 00:06:30.365 ************************************ 00:06:30.365 12:26:35 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:30.365 12:26:35 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:30.365 12:26:35 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:30.365 12:26:35 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:30.626 12:26:35 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.626 12:26:35 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:30.626 12:26:35 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.626 12:26:35 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:30.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.626 --rc genhtml_branch_coverage=1 00:06:30.626 --rc genhtml_function_coverage=1 00:06:30.626 --rc genhtml_legend=1 00:06:30.626 --rc geninfo_all_blocks=1 00:06:30.626 --rc geninfo_unexecuted_blocks=1 00:06:30.626 00:06:30.626 ' 00:06:30.626 12:26:35 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:30.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.626 --rc genhtml_branch_coverage=1 00:06:30.626 --rc genhtml_function_coverage=1 00:06:30.626 --rc genhtml_legend=1 00:06:30.626 --rc geninfo_all_blocks=1 00:06:30.626 --rc geninfo_unexecuted_blocks=1 00:06:30.626 00:06:30.626 ' 00:06:30.626 12:26:35 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:30.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.626 --rc genhtml_branch_coverage=1 00:06:30.626 --rc genhtml_function_coverage=1 00:06:30.626 --rc genhtml_legend=1 00:06:30.626 --rc geninfo_all_blocks=1 00:06:30.626 --rc geninfo_unexecuted_blocks=1 00:06:30.626 00:06:30.626 ' 00:06:30.626 12:26:35 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:30.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.626 --rc genhtml_branch_coverage=1 00:06:30.626 --rc genhtml_function_coverage=1 00:06:30.626 --rc genhtml_legend=1 00:06:30.626 --rc geninfo_all_blocks=1 00:06:30.626 --rc geninfo_unexecuted_blocks=1 00:06:30.626 00:06:30.626 ' 00:06:30.626 12:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.626 12:26:35 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.626 12:26:35 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.626 12:26:35 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.626 12:26:35 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.626 12:26:35 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:30.626 12:26:35 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.626 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.626 12:26:35 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.626 12:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:30.627 12:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:30.627 12:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:30.627 12:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:30.627 12:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:30.627 12:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:30.627 12:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:30.627 12:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:30.627 12:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:30.627 INFO: launching applications... 00:06:30.627 12:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:30.627 12:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
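The nvmf/common.sh block sourced above also derives a per-run host identity from nvme gen-hostnqn; roughly as follows (only the resulting values appear in the trace, so the parameter expansion used to derive NVME_HOSTID is an assumption):
NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}           # assumed: strip the nqn prefix, keep the uuid
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")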
00:06:30.627 12:26:35 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:30.627 12:26:35 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:30.627 12:26:35 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:30.627 12:26:35 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:30.627 12:26:35 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:30.627 12:26:35 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:30.627 12:26:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.627 12:26:35 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:30.627 12:26:35 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70832 00:06:30.627 Waiting for target to run... 00:06:30.627 12:26:35 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:30.627 12:26:35 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70832 /var/tmp/spdk_tgt.sock 00:06:30.627 12:26:35 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:30.627 12:26:35 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 70832 ']' 00:06:30.627 12:26:35 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:30.627 12:26:35 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:30.627 12:26:35 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:30.627 12:26:35 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.627 12:26:35 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:30.627 [2024-11-19 12:26:35.767744] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:30.627 [2024-11-19 12:26:35.767854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70832 ] 00:06:30.886 [2024-11-19 12:26:36.062303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.886 [2024-11-19 12:26:36.081520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.886 [2024-11-19 12:26:36.104038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.823 12:26:36 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.823 12:26:36 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:31.823 00:06:31.823 12:26:36 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:31.823 INFO: shutting down applications... 00:06:31.823 12:26:36 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
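The extra_key variant appears to check only that the target starts cleanly when the supplied JSON carries additional keys, then tears it down with the same SIGINT/poll sequence; a condensed sketch under that assumption:
BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
KEYCONF=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
$BIN -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$KEYCONF" &
pid=$!
# once the target is up (waitforlisten in the real script), shut it down again
kill -SIGINT "$pid"
wait "$pid" 2>/dev/null
echo Success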
00:06:31.823 12:26:36 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:31.823 12:26:36 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:31.823 12:26:36 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:31.823 12:26:36 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70832 ]] 00:06:31.823 12:26:36 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70832 00:06:31.823 12:26:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:31.823 12:26:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:31.823 12:26:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70832 00:06:31.823 12:26:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:32.082 12:26:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:32.082 12:26:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:32.082 12:26:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70832 00:06:32.082 12:26:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:32.082 12:26:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:32.082 12:26:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:32.082 SPDK target shutdown done 00:06:32.082 12:26:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:32.082 Success 00:06:32.082 12:26:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:32.082 ************************************ 00:06:32.082 END TEST json_config_extra_key 00:06:32.082 ************************************ 00:06:32.082 00:06:32.082 real 0m1.789s 00:06:32.082 user 0m1.660s 00:06:32.082 sys 0m0.329s 00:06:32.082 12:26:37 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.082 12:26:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:32.340 12:26:37 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:32.340 12:26:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.340 12:26:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.340 12:26:37 -- common/autotest_common.sh@10 -- # set +x 00:06:32.340 ************************************ 00:06:32.340 START TEST alias_rpc 00:06:32.340 ************************************ 00:06:32.340 12:26:37 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:32.340 * Looking for test storage... 
00:06:32.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:32.340 12:26:37 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:32.340 12:26:37 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:32.340 12:26:37 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:32.340 12:26:37 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:32.340 12:26:37 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.340 12:26:37 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.340 12:26:37 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.340 12:26:37 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.340 12:26:37 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.340 12:26:37 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.340 12:26:37 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.340 12:26:37 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.340 12:26:37 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.340 12:26:37 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.340 12:26:37 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.340 12:26:37 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:32.340 12:26:37 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:32.341 12:26:37 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.341 12:26:37 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.341 12:26:37 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:32.341 12:26:37 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:32.341 12:26:37 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.341 12:26:37 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:32.341 12:26:37 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.341 12:26:37 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:32.341 12:26:37 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:32.341 12:26:37 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.341 12:26:37 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:32.341 12:26:37 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.341 12:26:37 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.341 12:26:37 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.341 12:26:37 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:32.341 12:26:37 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.341 12:26:37 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:32.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.341 --rc genhtml_branch_coverage=1 00:06:32.341 --rc genhtml_function_coverage=1 00:06:32.341 --rc genhtml_legend=1 00:06:32.341 --rc geninfo_all_blocks=1 00:06:32.341 --rc geninfo_unexecuted_blocks=1 00:06:32.341 00:06:32.341 ' 00:06:32.341 12:26:37 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:32.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.341 --rc genhtml_branch_coverage=1 00:06:32.341 --rc genhtml_function_coverage=1 00:06:32.341 --rc genhtml_legend=1 00:06:32.341 --rc geninfo_all_blocks=1 00:06:32.341 --rc geninfo_unexecuted_blocks=1 00:06:32.341 00:06:32.341 ' 00:06:32.341 12:26:37 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:32.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.341 --rc genhtml_branch_coverage=1 00:06:32.341 --rc genhtml_function_coverage=1 00:06:32.341 --rc genhtml_legend=1 00:06:32.341 --rc geninfo_all_blocks=1 00:06:32.341 --rc geninfo_unexecuted_blocks=1 00:06:32.341 00:06:32.341 ' 00:06:32.341 12:26:37 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:32.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.341 --rc genhtml_branch_coverage=1 00:06:32.341 --rc genhtml_function_coverage=1 00:06:32.341 --rc genhtml_legend=1 00:06:32.341 --rc geninfo_all_blocks=1 00:06:32.341 --rc geninfo_unexecuted_blocks=1 00:06:32.341 00:06:32.341 ' 00:06:32.341 12:26:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:32.341 12:26:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70910 00:06:32.341 12:26:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:32.341 12:26:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70910 00:06:32.341 12:26:37 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 70910 ']' 00:06:32.341 12:26:37 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.341 12:26:37 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.341 12:26:37 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.341 12:26:37 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.341 12:26:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.600 [2024-11-19 12:26:37.607602] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:32.600 [2024-11-19 12:26:37.607720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70910 ] 00:06:32.600 [2024-11-19 12:26:37.747501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.600 [2024-11-19 12:26:37.779341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.600 [2024-11-19 12:26:37.812594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.859 12:26:37 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.859 12:26:37 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:32.859 12:26:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:33.118 12:26:38 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70910 00:06:33.118 12:26:38 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 70910 ']' 00:06:33.118 12:26:38 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 70910 00:06:33.118 12:26:38 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:33.118 12:26:38 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.118 12:26:38 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70910 00:06:33.118 12:26:38 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.118 12:26:38 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.118 killing process with pid 70910 00:06:33.118 12:26:38 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70910' 00:06:33.118 12:26:38 alias_rpc -- common/autotest_common.sh@969 -- # kill 70910 00:06:33.118 12:26:38 alias_rpc -- common/autotest_common.sh@974 -- # wait 70910 00:06:33.377 00:06:33.377 real 0m1.165s 00:06:33.377 user 0m1.397s 00:06:33.377 sys 0m0.303s 00:06:33.377 12:26:38 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.377 12:26:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.377 ************************************ 00:06:33.377 END TEST alias_rpc 00:06:33.377 ************************************ 00:06:33.377 12:26:38 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:33.377 12:26:38 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:33.377 12:26:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.377 12:26:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.377 12:26:38 -- common/autotest_common.sh@10 -- # set +x 00:06:33.377 ************************************ 00:06:33.377 START TEST spdkcli_tcp 00:06:33.377 ************************************ 00:06:33.377 12:26:38 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:33.637 * Looking for test storage... 
00:06:33.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.637 12:26:38 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:33.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.637 --rc genhtml_branch_coverage=1 00:06:33.637 --rc genhtml_function_coverage=1 00:06:33.637 --rc genhtml_legend=1 00:06:33.637 --rc geninfo_all_blocks=1 00:06:33.637 --rc geninfo_unexecuted_blocks=1 00:06:33.637 00:06:33.637 ' 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:33.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.637 --rc genhtml_branch_coverage=1 00:06:33.637 --rc genhtml_function_coverage=1 00:06:33.637 --rc genhtml_legend=1 00:06:33.637 --rc geninfo_all_blocks=1 00:06:33.637 --rc geninfo_unexecuted_blocks=1 00:06:33.637 
00:06:33.637 ' 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:33.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.637 --rc genhtml_branch_coverage=1 00:06:33.637 --rc genhtml_function_coverage=1 00:06:33.637 --rc genhtml_legend=1 00:06:33.637 --rc geninfo_all_blocks=1 00:06:33.637 --rc geninfo_unexecuted_blocks=1 00:06:33.637 00:06:33.637 ' 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:33.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.637 --rc genhtml_branch_coverage=1 00:06:33.637 --rc genhtml_function_coverage=1 00:06:33.637 --rc genhtml_legend=1 00:06:33.637 --rc geninfo_all_blocks=1 00:06:33.637 --rc geninfo_unexecuted_blocks=1 00:06:33.637 00:06:33.637 ' 00:06:33.637 12:26:38 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:33.637 12:26:38 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:33.637 12:26:38 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:33.637 12:26:38 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:33.637 12:26:38 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:33.637 12:26:38 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:33.637 12:26:38 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.637 12:26:38 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70981 00:06:33.637 12:26:38 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70981 00:06:33.637 12:26:38 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70981 ']' 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.637 12:26:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.637 [2024-11-19 12:26:38.840183] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:33.637 [2024-11-19 12:26:38.840287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70981 ] 00:06:33.896 [2024-11-19 12:26:38.979118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.896 [2024-11-19 12:26:39.012093] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.896 [2024-11-19 12:26:39.012100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.897 [2024-11-19 12:26:39.049648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.155 12:26:39 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.155 12:26:39 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:34.155 12:26:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70990 00:06:34.155 12:26:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:34.155 12:26:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:34.155 [ 00:06:34.155 "bdev_malloc_delete", 00:06:34.155 "bdev_malloc_create", 00:06:34.155 "bdev_null_resize", 00:06:34.155 "bdev_null_delete", 00:06:34.155 "bdev_null_create", 00:06:34.155 "bdev_nvme_cuse_unregister", 00:06:34.155 "bdev_nvme_cuse_register", 00:06:34.155 "bdev_opal_new_user", 00:06:34.155 "bdev_opal_set_lock_state", 00:06:34.155 "bdev_opal_delete", 00:06:34.155 "bdev_opal_get_info", 00:06:34.155 "bdev_opal_create", 00:06:34.155 "bdev_nvme_opal_revert", 00:06:34.155 "bdev_nvme_opal_init", 00:06:34.155 "bdev_nvme_send_cmd", 00:06:34.155 "bdev_nvme_set_keys", 00:06:34.155 "bdev_nvme_get_path_iostat", 00:06:34.155 "bdev_nvme_get_mdns_discovery_info", 00:06:34.155 "bdev_nvme_stop_mdns_discovery", 00:06:34.155 "bdev_nvme_start_mdns_discovery", 00:06:34.155 "bdev_nvme_set_multipath_policy", 00:06:34.155 "bdev_nvme_set_preferred_path", 00:06:34.155 "bdev_nvme_get_io_paths", 00:06:34.155 "bdev_nvme_remove_error_injection", 00:06:34.155 "bdev_nvme_add_error_injection", 00:06:34.155 "bdev_nvme_get_discovery_info", 00:06:34.155 "bdev_nvme_stop_discovery", 00:06:34.155 "bdev_nvme_start_discovery", 00:06:34.155 "bdev_nvme_get_controller_health_info", 00:06:34.155 "bdev_nvme_disable_controller", 00:06:34.155 "bdev_nvme_enable_controller", 00:06:34.155 "bdev_nvme_reset_controller", 00:06:34.155 "bdev_nvme_get_transport_statistics", 00:06:34.155 "bdev_nvme_apply_firmware", 00:06:34.155 "bdev_nvme_detach_controller", 00:06:34.155 "bdev_nvme_get_controllers", 00:06:34.155 "bdev_nvme_attach_controller", 00:06:34.155 "bdev_nvme_set_hotplug", 00:06:34.155 "bdev_nvme_set_options", 00:06:34.155 "bdev_passthru_delete", 00:06:34.155 "bdev_passthru_create", 00:06:34.155 "bdev_lvol_set_parent_bdev", 00:06:34.155 "bdev_lvol_set_parent", 00:06:34.155 "bdev_lvol_check_shallow_copy", 00:06:34.155 "bdev_lvol_start_shallow_copy", 00:06:34.155 "bdev_lvol_grow_lvstore", 00:06:34.155 "bdev_lvol_get_lvols", 00:06:34.155 "bdev_lvol_get_lvstores", 00:06:34.155 "bdev_lvol_delete", 00:06:34.155 "bdev_lvol_set_read_only", 00:06:34.155 "bdev_lvol_resize", 00:06:34.155 "bdev_lvol_decouple_parent", 00:06:34.155 "bdev_lvol_inflate", 00:06:34.155 "bdev_lvol_rename", 00:06:34.155 "bdev_lvol_clone_bdev", 00:06:34.155 "bdev_lvol_clone", 00:06:34.155 "bdev_lvol_snapshot", 
00:06:34.155 "bdev_lvol_create", 00:06:34.155 "bdev_lvol_delete_lvstore", 00:06:34.155 "bdev_lvol_rename_lvstore", 00:06:34.155 "bdev_lvol_create_lvstore", 00:06:34.155 "bdev_raid_set_options", 00:06:34.155 "bdev_raid_remove_base_bdev", 00:06:34.155 "bdev_raid_add_base_bdev", 00:06:34.155 "bdev_raid_delete", 00:06:34.155 "bdev_raid_create", 00:06:34.155 "bdev_raid_get_bdevs", 00:06:34.155 "bdev_error_inject_error", 00:06:34.155 "bdev_error_delete", 00:06:34.155 "bdev_error_create", 00:06:34.155 "bdev_split_delete", 00:06:34.155 "bdev_split_create", 00:06:34.155 "bdev_delay_delete", 00:06:34.155 "bdev_delay_create", 00:06:34.155 "bdev_delay_update_latency", 00:06:34.155 "bdev_zone_block_delete", 00:06:34.156 "bdev_zone_block_create", 00:06:34.156 "blobfs_create", 00:06:34.156 "blobfs_detect", 00:06:34.156 "blobfs_set_cache_size", 00:06:34.156 "bdev_aio_delete", 00:06:34.156 "bdev_aio_rescan", 00:06:34.156 "bdev_aio_create", 00:06:34.156 "bdev_ftl_set_property", 00:06:34.156 "bdev_ftl_get_properties", 00:06:34.156 "bdev_ftl_get_stats", 00:06:34.156 "bdev_ftl_unmap", 00:06:34.156 "bdev_ftl_unload", 00:06:34.156 "bdev_ftl_delete", 00:06:34.156 "bdev_ftl_load", 00:06:34.156 "bdev_ftl_create", 00:06:34.156 "bdev_virtio_attach_controller", 00:06:34.156 "bdev_virtio_scsi_get_devices", 00:06:34.156 "bdev_virtio_detach_controller", 00:06:34.156 "bdev_virtio_blk_set_hotplug", 00:06:34.156 "bdev_iscsi_delete", 00:06:34.156 "bdev_iscsi_create", 00:06:34.156 "bdev_iscsi_set_options", 00:06:34.156 "bdev_uring_delete", 00:06:34.156 "bdev_uring_rescan", 00:06:34.156 "bdev_uring_create", 00:06:34.156 "accel_error_inject_error", 00:06:34.156 "ioat_scan_accel_module", 00:06:34.156 "dsa_scan_accel_module", 00:06:34.156 "iaa_scan_accel_module", 00:06:34.156 "vfu_virtio_create_fs_endpoint", 00:06:34.156 "vfu_virtio_create_scsi_endpoint", 00:06:34.156 "vfu_virtio_scsi_remove_target", 00:06:34.156 "vfu_virtio_scsi_add_target", 00:06:34.156 "vfu_virtio_create_blk_endpoint", 00:06:34.156 "vfu_virtio_delete_endpoint", 00:06:34.156 "keyring_file_remove_key", 00:06:34.156 "keyring_file_add_key", 00:06:34.156 "keyring_linux_set_options", 00:06:34.156 "fsdev_aio_delete", 00:06:34.156 "fsdev_aio_create", 00:06:34.156 "iscsi_get_histogram", 00:06:34.156 "iscsi_enable_histogram", 00:06:34.156 "iscsi_set_options", 00:06:34.156 "iscsi_get_auth_groups", 00:06:34.156 "iscsi_auth_group_remove_secret", 00:06:34.156 "iscsi_auth_group_add_secret", 00:06:34.156 "iscsi_delete_auth_group", 00:06:34.156 "iscsi_create_auth_group", 00:06:34.156 "iscsi_set_discovery_auth", 00:06:34.156 "iscsi_get_options", 00:06:34.156 "iscsi_target_node_request_logout", 00:06:34.156 "iscsi_target_node_set_redirect", 00:06:34.156 "iscsi_target_node_set_auth", 00:06:34.156 "iscsi_target_node_add_lun", 00:06:34.156 "iscsi_get_stats", 00:06:34.156 "iscsi_get_connections", 00:06:34.156 "iscsi_portal_group_set_auth", 00:06:34.156 "iscsi_start_portal_group", 00:06:34.156 "iscsi_delete_portal_group", 00:06:34.156 "iscsi_create_portal_group", 00:06:34.156 "iscsi_get_portal_groups", 00:06:34.156 "iscsi_delete_target_node", 00:06:34.156 "iscsi_target_node_remove_pg_ig_maps", 00:06:34.156 "iscsi_target_node_add_pg_ig_maps", 00:06:34.156 "iscsi_create_target_node", 00:06:34.156 "iscsi_get_target_nodes", 00:06:34.156 "iscsi_delete_initiator_group", 00:06:34.156 "iscsi_initiator_group_remove_initiators", 00:06:34.156 "iscsi_initiator_group_add_initiators", 00:06:34.156 "iscsi_create_initiator_group", 00:06:34.156 "iscsi_get_initiator_groups", 00:06:34.156 
"nvmf_set_crdt", 00:06:34.156 "nvmf_set_config", 00:06:34.156 "nvmf_set_max_subsystems", 00:06:34.156 "nvmf_stop_mdns_prr", 00:06:34.156 "nvmf_publish_mdns_prr", 00:06:34.156 "nvmf_subsystem_get_listeners", 00:06:34.156 "nvmf_subsystem_get_qpairs", 00:06:34.156 "nvmf_subsystem_get_controllers", 00:06:34.156 "nvmf_get_stats", 00:06:34.156 "nvmf_get_transports", 00:06:34.156 "nvmf_create_transport", 00:06:34.156 "nvmf_get_targets", 00:06:34.156 "nvmf_delete_target", 00:06:34.156 "nvmf_create_target", 00:06:34.156 "nvmf_subsystem_allow_any_host", 00:06:34.156 "nvmf_subsystem_set_keys", 00:06:34.156 "nvmf_subsystem_remove_host", 00:06:34.156 "nvmf_subsystem_add_host", 00:06:34.156 "nvmf_ns_remove_host", 00:06:34.156 "nvmf_ns_add_host", 00:06:34.156 "nvmf_subsystem_remove_ns", 00:06:34.156 "nvmf_subsystem_set_ns_ana_group", 00:06:34.156 "nvmf_subsystem_add_ns", 00:06:34.156 "nvmf_subsystem_listener_set_ana_state", 00:06:34.156 "nvmf_discovery_get_referrals", 00:06:34.156 "nvmf_discovery_remove_referral", 00:06:34.156 "nvmf_discovery_add_referral", 00:06:34.156 "nvmf_subsystem_remove_listener", 00:06:34.156 "nvmf_subsystem_add_listener", 00:06:34.156 "nvmf_delete_subsystem", 00:06:34.156 "nvmf_create_subsystem", 00:06:34.156 "nvmf_get_subsystems", 00:06:34.156 "env_dpdk_get_mem_stats", 00:06:34.156 "nbd_get_disks", 00:06:34.156 "nbd_stop_disk", 00:06:34.156 "nbd_start_disk", 00:06:34.156 "ublk_recover_disk", 00:06:34.156 "ublk_get_disks", 00:06:34.156 "ublk_stop_disk", 00:06:34.156 "ublk_start_disk", 00:06:34.156 "ublk_destroy_target", 00:06:34.156 "ublk_create_target", 00:06:34.156 "virtio_blk_create_transport", 00:06:34.156 "virtio_blk_get_transports", 00:06:34.156 "vhost_controller_set_coalescing", 00:06:34.156 "vhost_get_controllers", 00:06:34.156 "vhost_delete_controller", 00:06:34.156 "vhost_create_blk_controller", 00:06:34.156 "vhost_scsi_controller_remove_target", 00:06:34.156 "vhost_scsi_controller_add_target", 00:06:34.156 "vhost_start_scsi_controller", 00:06:34.156 "vhost_create_scsi_controller", 00:06:34.156 "thread_set_cpumask", 00:06:34.156 "scheduler_set_options", 00:06:34.156 "framework_get_governor", 00:06:34.156 "framework_get_scheduler", 00:06:34.156 "framework_set_scheduler", 00:06:34.156 "framework_get_reactors", 00:06:34.156 "thread_get_io_channels", 00:06:34.156 "thread_get_pollers", 00:06:34.156 "thread_get_stats", 00:06:34.156 "framework_monitor_context_switch", 00:06:34.156 "spdk_kill_instance", 00:06:34.156 "log_enable_timestamps", 00:06:34.156 "log_get_flags", 00:06:34.156 "log_clear_flag", 00:06:34.156 "log_set_flag", 00:06:34.156 "log_get_level", 00:06:34.156 "log_set_level", 00:06:34.156 "log_get_print_level", 00:06:34.156 "log_set_print_level", 00:06:34.156 "framework_enable_cpumask_locks", 00:06:34.156 "framework_disable_cpumask_locks", 00:06:34.156 "framework_wait_init", 00:06:34.156 "framework_start_init", 00:06:34.156 "scsi_get_devices", 00:06:34.156 "bdev_get_histogram", 00:06:34.156 "bdev_enable_histogram", 00:06:34.156 "bdev_set_qos_limit", 00:06:34.156 "bdev_set_qd_sampling_period", 00:06:34.156 "bdev_get_bdevs", 00:06:34.156 "bdev_reset_iostat", 00:06:34.156 "bdev_get_iostat", 00:06:34.156 "bdev_examine", 00:06:34.156 "bdev_wait_for_examine", 00:06:34.156 "bdev_set_options", 00:06:34.156 "accel_get_stats", 00:06:34.156 "accel_set_options", 00:06:34.156 "accel_set_driver", 00:06:34.156 "accel_crypto_key_destroy", 00:06:34.156 "accel_crypto_keys_get", 00:06:34.156 "accel_crypto_key_create", 00:06:34.156 "accel_assign_opc", 00:06:34.156 
"accel_get_module_info", 00:06:34.156 "accel_get_opc_assignments", 00:06:34.156 "vmd_rescan", 00:06:34.156 "vmd_remove_device", 00:06:34.156 "vmd_enable", 00:06:34.156 "sock_get_default_impl", 00:06:34.156 "sock_set_default_impl", 00:06:34.156 "sock_impl_set_options", 00:06:34.156 "sock_impl_get_options", 00:06:34.156 "iobuf_get_stats", 00:06:34.156 "iobuf_set_options", 00:06:34.156 "keyring_get_keys", 00:06:34.156 "vfu_tgt_set_base_path", 00:06:34.156 "framework_get_pci_devices", 00:06:34.156 "framework_get_config", 00:06:34.156 "framework_get_subsystems", 00:06:34.156 "fsdev_set_opts", 00:06:34.156 "fsdev_get_opts", 00:06:34.156 "trace_get_info", 00:06:34.156 "trace_get_tpoint_group_mask", 00:06:34.156 "trace_disable_tpoint_group", 00:06:34.156 "trace_enable_tpoint_group", 00:06:34.156 "trace_clear_tpoint_mask", 00:06:34.156 "trace_set_tpoint_mask", 00:06:34.156 "notify_get_notifications", 00:06:34.156 "notify_get_types", 00:06:34.156 "spdk_get_version", 00:06:34.156 "rpc_get_methods" 00:06:34.156 ] 00:06:34.156 12:26:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:34.156 12:26:39 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:34.156 12:26:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.415 12:26:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:34.415 12:26:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70981 00:06:34.415 12:26:39 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70981 ']' 00:06:34.415 12:26:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70981 00:06:34.415 12:26:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:34.415 12:26:39 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:34.415 12:26:39 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70981 00:06:34.415 12:26:39 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:34.415 12:26:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:34.415 killing process with pid 70981 00:06:34.415 12:26:39 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70981' 00:06:34.415 12:26:39 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70981 00:06:34.415 12:26:39 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70981 00:06:34.675 00:06:34.675 real 0m1.120s 00:06:34.675 user 0m1.908s 00:06:34.675 sys 0m0.357s 00:06:34.675 12:26:39 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.675 ************************************ 00:06:34.675 END TEST spdkcli_tcp 00:06:34.675 12:26:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.675 ************************************ 00:06:34.675 12:26:39 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:34.675 12:26:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.675 12:26:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.675 12:26:39 -- common/autotest_common.sh@10 -- # set +x 00:06:34.675 ************************************ 00:06:34.675 START TEST dpdk_mem_utility 00:06:34.675 ************************************ 00:06:34.675 12:26:39 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:34.675 * Looking for test storage... 
00:06:34.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:34.675 12:26:39 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:34.675 12:26:39 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:34.675 12:26:39 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:34.675 12:26:39 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.675 12:26:39 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:34.934 12:26:39 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.934 12:26:39 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.934 12:26:39 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.934 12:26:39 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:34.934 12:26:39 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.934 12:26:39 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:34.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.934 --rc genhtml_branch_coverage=1 00:06:34.934 --rc genhtml_function_coverage=1 00:06:34.934 --rc genhtml_legend=1 00:06:34.934 --rc geninfo_all_blocks=1 00:06:34.934 --rc geninfo_unexecuted_blocks=1 00:06:34.934 00:06:34.934 ' 00:06:34.934 12:26:39 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:34.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.934 --rc 
genhtml_branch_coverage=1 00:06:34.934 --rc genhtml_function_coverage=1 00:06:34.934 --rc genhtml_legend=1 00:06:34.934 --rc geninfo_all_blocks=1 00:06:34.934 --rc geninfo_unexecuted_blocks=1 00:06:34.934 00:06:34.934 ' 00:06:34.934 12:26:39 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:34.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.934 --rc genhtml_branch_coverage=1 00:06:34.934 --rc genhtml_function_coverage=1 00:06:34.934 --rc genhtml_legend=1 00:06:34.934 --rc geninfo_all_blocks=1 00:06:34.934 --rc geninfo_unexecuted_blocks=1 00:06:34.934 00:06:34.934 ' 00:06:34.934 12:26:39 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:34.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.934 --rc genhtml_branch_coverage=1 00:06:34.934 --rc genhtml_function_coverage=1 00:06:34.934 --rc genhtml_legend=1 00:06:34.934 --rc geninfo_all_blocks=1 00:06:34.934 --rc geninfo_unexecuted_blocks=1 00:06:34.934 00:06:34.934 ' 00:06:34.934 12:26:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:34.934 12:26:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71067 00:06:34.934 12:26:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:34.934 12:26:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71067 00:06:34.934 12:26:39 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 71067 ']' 00:06:34.934 12:26:39 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.934 12:26:39 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.934 12:26:39 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.934 12:26:39 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.934 12:26:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:34.934 [2024-11-19 12:26:40.004040] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:34.934 [2024-11-19 12:26:40.004147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71067 ] 00:06:34.934 [2024-11-19 12:26:40.140226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.934 [2024-11-19 12:26:40.172582] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.194 [2024-11-19 12:26:40.206664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.194 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.194 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:35.194 12:26:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:35.194 12:26:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:35.194 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.194 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:35.194 { 00:06:35.194 "filename": "/tmp/spdk_mem_dump.txt" 00:06:35.194 } 00:06:35.194 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.194 12:26:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:35.194 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:35.194 1 heaps totaling size 860.000000 MiB 00:06:35.194 size: 860.000000 MiB heap id: 0 00:06:35.194 end heaps---------- 00:06:35.194 9 mempools totaling size 642.649841 MiB 00:06:35.194 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:35.194 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:35.194 size: 92.545471 MiB name: bdev_io_71067 00:06:35.194 size: 51.011292 MiB name: evtpool_71067 00:06:35.194 size: 50.003479 MiB name: msgpool_71067 00:06:35.194 size: 36.509338 MiB name: fsdev_io_71067 00:06:35.194 size: 21.763794 MiB name: PDU_Pool 00:06:35.194 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:35.194 size: 0.026123 MiB name: Session_Pool 00:06:35.194 end mempools------- 00:06:35.194 6 memzones totaling size 4.142822 MiB 00:06:35.194 size: 1.000366 MiB name: RG_ring_0_71067 00:06:35.194 size: 1.000366 MiB name: RG_ring_1_71067 00:06:35.194 size: 1.000366 MiB name: RG_ring_4_71067 00:06:35.194 size: 1.000366 MiB name: RG_ring_5_71067 00:06:35.194 size: 0.125366 MiB name: RG_ring_2_71067 00:06:35.194 size: 0.015991 MiB name: RG_ring_3_71067 00:06:35.194 end memzones------- 00:06:35.194 12:26:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:35.455 heap id: 0 total size: 860.000000 MiB number of busy elements: 316 number of free elements: 16 00:06:35.455 list of free elements. 
size: 13.934875 MiB 00:06:35.455 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:35.455 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:35.455 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:35.455 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:35.455 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:35.455 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:35.455 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:35.455 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:35.455 element at address: 0x200000200000 with size: 0.835022 MiB 00:06:35.455 element at address: 0x20001d800000 with size: 0.566956 MiB 00:06:35.455 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:35.455 element at address: 0x200003e00000 with size: 0.487732 MiB 00:06:35.455 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:35.455 element at address: 0x200007000000 with size: 0.480286 MiB 00:06:35.455 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:35.455 element at address: 0x200003a00000 with size: 0.352844 MiB 00:06:35.455 list of standard malloc elements. size: 199.268433 MiB 00:06:35.455 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:35.455 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:35.455 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:35.455 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:35.455 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:35.455 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:35.455 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:35.455 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:35.455 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:35.455 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:35.455 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:35.455 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:35.455 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:35.455 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:35.455 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:35.455 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:35.455 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:35.455 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d6e00 with size: 0.000183 MiB 
00:06:35.456 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003a5a540 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003a5ea00 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:35.456 element at 
address: 0x200003e7d540 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000707af40 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:35.456 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000d87d4c0 
with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:35.456 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d891240 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d891300 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d891480 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d891540 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d891600 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:35.456 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d892c80 with size: 0.000183 MiB 
00:06:35.457 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:35.457 element at 
address: 0x20001d895200 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6e340 
with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:35.457 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:35.457 list of memzone associated elements. 
size: 646.796692 MiB 00:06:35.457 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:35.457 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:35.457 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:35.457 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:35.457 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:35.457 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71067_0 00:06:35.458 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:35.458 associated memzone info: size: 48.002930 MiB name: MP_evtpool_71067_0 00:06:35.458 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:35.458 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71067_0 00:06:35.458 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:35.458 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71067_0 00:06:35.458 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:35.458 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:35.458 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:35.458 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:35.458 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:35.458 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_71067 00:06:35.458 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:35.458 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71067 00:06:35.458 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:35.458 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71067 00:06:35.458 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:35.458 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:35.458 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:35.458 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:35.458 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:35.458 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:35.458 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:35.458 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:35.458 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:35.458 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71067 00:06:35.458 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:35.458 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71067 00:06:35.458 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:35.458 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71067 00:06:35.458 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:35.458 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71067 00:06:35.458 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:35.458 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71067 00:06:35.458 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:35.458 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71067 00:06:35.458 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:35.458 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:35.458 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:35.458 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:35.458 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:35.458 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:35.458 element at address: 0x200003a5eac0 with size: 0.125488 MiB 00:06:35.458 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71067 00:06:35.458 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:35.458 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:35.458 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:35.458 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:35.458 element at address: 0x200003a5a800 with size: 0.016113 MiB 00:06:35.458 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71067 00:06:35.458 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:35.458 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:35.458 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:35.458 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71067 00:06:35.458 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:35.458 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71067 00:06:35.458 element at address: 0x200003a5a600 with size: 0.000305 MiB 00:06:35.458 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71067 00:06:35.458 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:35.458 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:35.458 12:26:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:35.458 12:26:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71067 00:06:35.458 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 71067 ']' 00:06:35.458 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 71067 00:06:35.458 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:35.458 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.458 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71067 00:06:35.458 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.458 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.458 killing process with pid 71067 00:06:35.458 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71067' 00:06:35.458 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 71067 00:06:35.458 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 71067 00:06:35.718 00:06:35.718 real 0m0.984s 00:06:35.718 user 0m1.051s 00:06:35.718 sys 0m0.305s 00:06:35.718 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.718 12:26:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:35.718 ************************************ 00:06:35.718 END TEST dpdk_mem_utility 00:06:35.718 ************************************ 00:06:35.718 12:26:40 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:35.718 12:26:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.718 12:26:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.718 12:26:40 -- common/autotest_common.sh@10 -- # set +x 
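Note: the spdkcli_tcp and dpdk_mem_utility runs traced above follow the same pattern: start spdk_tgt, reach its JSON-RPC socket at /var/tmp/spdk.sock (bridged to TCP by socat in the spdkcli_tcp case), issue RPCs, then kill the target. As a rough manual reproduction, assuming the same built SPDK tree and a single spdk_tgt instance (each test above actually starts its own), the commands already echoed in the trace can be run by hand; only the trailing comments are added here:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 &                                   # target on cores 0-1, RPC on /var/tmp/spdk.sock
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &                                         # expose the RPC socket on TCP port 9998
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods    # list registered RPC methods over TCP
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats                              # writes /tmp/spdk_mem_dump.txt
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                                           # heap / mempool / memzone summary
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0                                      # detailed element map of heap id 0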
00:06:35.718 ************************************ 00:06:35.718 START TEST event 00:06:35.718 ************************************ 00:06:35.718 12:26:40 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:35.718 * Looking for test storage... 00:06:35.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:35.718 12:26:40 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:35.718 12:26:40 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:35.718 12:26:40 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:35.718 12:26:40 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:35.718 12:26:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.718 12:26:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.718 12:26:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.718 12:26:40 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.718 12:26:40 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.718 12:26:40 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.718 12:26:40 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.718 12:26:40 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.718 12:26:40 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.718 12:26:40 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.718 12:26:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.718 12:26:40 event -- scripts/common.sh@344 -- # case "$op" in 00:06:35.718 12:26:40 event -- scripts/common.sh@345 -- # : 1 00:06:35.718 12:26:40 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.718 12:26:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.718 12:26:40 event -- scripts/common.sh@365 -- # decimal 1 00:06:35.718 12:26:40 event -- scripts/common.sh@353 -- # local d=1 00:06:35.718 12:26:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.718 12:26:40 event -- scripts/common.sh@355 -- # echo 1 00:06:35.718 12:26:40 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.718 12:26:40 event -- scripts/common.sh@366 -- # decimal 2 00:06:35.718 12:26:40 event -- scripts/common.sh@353 -- # local d=2 00:06:35.718 12:26:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.718 12:26:40 event -- scripts/common.sh@355 -- # echo 2 00:06:35.718 12:26:40 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.718 12:26:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.718 12:26:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.718 12:26:40 event -- scripts/common.sh@368 -- # return 0 00:06:35.718 12:26:40 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.718 12:26:40 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:35.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.718 --rc genhtml_branch_coverage=1 00:06:35.718 --rc genhtml_function_coverage=1 00:06:35.718 --rc genhtml_legend=1 00:06:35.718 --rc geninfo_all_blocks=1 00:06:35.718 --rc geninfo_unexecuted_blocks=1 00:06:35.718 00:06:35.718 ' 00:06:35.718 12:26:40 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:35.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.718 --rc genhtml_branch_coverage=1 00:06:35.718 --rc genhtml_function_coverage=1 00:06:35.718 --rc genhtml_legend=1 00:06:35.718 --rc 
geninfo_all_blocks=1 00:06:35.718 --rc geninfo_unexecuted_blocks=1 00:06:35.718 00:06:35.718 ' 00:06:35.718 12:26:40 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:35.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.718 --rc genhtml_branch_coverage=1 00:06:35.718 --rc genhtml_function_coverage=1 00:06:35.718 --rc genhtml_legend=1 00:06:35.718 --rc geninfo_all_blocks=1 00:06:35.718 --rc geninfo_unexecuted_blocks=1 00:06:35.718 00:06:35.718 ' 00:06:35.718 12:26:40 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:35.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.718 --rc genhtml_branch_coverage=1 00:06:35.718 --rc genhtml_function_coverage=1 00:06:35.718 --rc genhtml_legend=1 00:06:35.718 --rc geninfo_all_blocks=1 00:06:35.718 --rc geninfo_unexecuted_blocks=1 00:06:35.718 00:06:35.718 ' 00:06:35.718 12:26:40 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:35.718 12:26:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:35.718 12:26:40 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:35.718 12:26:40 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:35.718 12:26:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.718 12:26:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.718 ************************************ 00:06:35.718 START TEST event_perf 00:06:35.718 ************************************ 00:06:35.719 12:26:40 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:35.978 Running I/O for 1 seconds...[2024-11-19 12:26:40.989136] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:35.978 [2024-11-19 12:26:40.989245] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71139 ] 00:06:35.978 [2024-11-19 12:26:41.127408] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:35.978 [2024-11-19 12:26:41.160631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.978 Running I/O for 1 seconds...[2024-11-19 12:26:41.160734] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.978 [2024-11-19 12:26:41.160855] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.978 [2024-11-19 12:26:41.160861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.356 00:06:37.356 lcore 0: 202934 00:06:37.356 lcore 1: 202934 00:06:37.356 lcore 2: 202934 00:06:37.356 lcore 3: 202933 00:06:37.356 done. 
00:06:37.356 00:06:37.356 real 0m1.239s 00:06:37.356 user 0m4.076s 00:06:37.356 sys 0m0.045s 00:06:37.356 12:26:42 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.356 12:26:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:37.356 ************************************ 00:06:37.356 END TEST event_perf 00:06:37.356 ************************************ 00:06:37.356 12:26:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:37.356 12:26:42 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:37.356 12:26:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.356 12:26:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.356 ************************************ 00:06:37.356 START TEST event_reactor 00:06:37.356 ************************************ 00:06:37.356 12:26:42 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:37.356 [2024-11-19 12:26:42.278434] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:37.356 [2024-11-19 12:26:42.278533] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71177 ] 00:06:37.356 [2024-11-19 12:26:42.416384] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.356 [2024-11-19 12:26:42.452253] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.294 test_start 00:06:38.294 oneshot 00:06:38.294 tick 100 00:06:38.294 tick 100 00:06:38.294 tick 250 00:06:38.294 tick 100 00:06:38.294 tick 100 00:06:38.294 tick 250 00:06:38.294 tick 100 00:06:38.294 tick 500 00:06:38.294 tick 100 00:06:38.294 tick 100 00:06:38.294 tick 250 00:06:38.294 tick 100 00:06:38.294 tick 100 00:06:38.294 test_end 00:06:38.294 00:06:38.294 real 0m1.247s 00:06:38.294 user 0m1.099s 00:06:38.294 sys 0m0.043s 00:06:38.294 12:26:43 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.294 12:26:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:38.294 ************************************ 00:06:38.294 END TEST event_reactor 00:06:38.294 ************************************ 00:06:38.294 12:26:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:38.294 12:26:43 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:38.294 12:26:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.294 12:26:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.554 ************************************ 00:06:38.554 START TEST event_reactor_perf 00:06:38.554 ************************************ 00:06:38.554 12:26:43 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:38.554 [2024-11-19 12:26:43.575511] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:38.554 [2024-11-19 12:26:43.575597] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71213 ] 00:06:38.554 [2024-11-19 12:26:43.706094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.554 [2024-11-19 12:26:43.740935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.932 test_start 00:06:39.932 test_end 00:06:39.932 Performance: 435299 events per second 00:06:39.932 00:06:39.932 real 0m1.232s 00:06:39.932 user 0m1.087s 00:06:39.932 sys 0m0.040s 00:06:39.932 12:26:44 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.932 12:26:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:39.932 ************************************ 00:06:39.932 END TEST event_reactor_perf 00:06:39.932 ************************************ 00:06:39.932 12:26:44 event -- event/event.sh@49 -- # uname -s 00:06:39.932 12:26:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:39.932 12:26:44 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:39.932 12:26:44 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.932 12:26:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.932 12:26:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.932 ************************************ 00:06:39.932 START TEST event_scheduler 00:06:39.932 ************************************ 00:06:39.932 12:26:44 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:39.932 * Looking for test storage... 
00:06:39.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:39.932 12:26:44 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:39.932 12:26:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:39.932 12:26:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:39.932 12:26:45 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.932 12:26:45 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:39.933 12:26:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:39.933 12:26:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.933 12:26:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:39.933 12:26:45 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.933 12:26:45 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.933 12:26:45 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.933 12:26:45 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:39.933 12:26:45 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.933 12:26:45 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:39.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.933 --rc genhtml_branch_coverage=1 00:06:39.933 --rc genhtml_function_coverage=1 00:06:39.933 --rc genhtml_legend=1 00:06:39.933 --rc geninfo_all_blocks=1 00:06:39.933 --rc geninfo_unexecuted_blocks=1 00:06:39.933 00:06:39.933 ' 00:06:39.933 12:26:45 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:39.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.933 --rc genhtml_branch_coverage=1 00:06:39.933 --rc genhtml_function_coverage=1 00:06:39.933 --rc genhtml_legend=1 00:06:39.933 --rc geninfo_all_blocks=1 00:06:39.933 --rc geninfo_unexecuted_blocks=1 00:06:39.933 00:06:39.933 ' 00:06:39.933 12:26:45 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:39.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.933 --rc genhtml_branch_coverage=1 00:06:39.933 --rc genhtml_function_coverage=1 00:06:39.933 --rc genhtml_legend=1 00:06:39.933 --rc geninfo_all_blocks=1 00:06:39.933 --rc geninfo_unexecuted_blocks=1 00:06:39.933 00:06:39.933 ' 00:06:39.933 12:26:45 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:39.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.933 --rc genhtml_branch_coverage=1 00:06:39.933 --rc genhtml_function_coverage=1 00:06:39.933 --rc genhtml_legend=1 00:06:39.933 --rc geninfo_all_blocks=1 00:06:39.933 --rc geninfo_unexecuted_blocks=1 00:06:39.933 00:06:39.933 ' 00:06:39.933 12:26:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:39.933 12:26:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71277 00:06:39.933 12:26:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:39.933 12:26:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.933 12:26:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71277 00:06:39.933 12:26:45 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 71277 ']' 00:06:39.933 12:26:45 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.933 12:26:45 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.933 12:26:45 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.933 12:26:45 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.933 12:26:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:39.933 [2024-11-19 12:26:45.090968] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:39.933 [2024-11-19 12:26:45.091266] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71277 ] 00:06:40.192 [2024-11-19 12:26:45.239751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.192 [2024-11-19 12:26:45.286703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.192 [2024-11-19 12:26:45.286841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.193 [2024-11-19 12:26:45.286951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.193 [2024-11-19 12:26:45.287578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.193 12:26:45 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.193 12:26:45 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:40.193 12:26:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:40.193 12:26:45 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.193 12:26:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:40.193 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:40.193 POWER: Cannot set governor of lcore 0 to userspace 00:06:40.193 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:40.193 POWER: Cannot set governor of lcore 0 to performance 00:06:40.193 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:40.193 POWER: Cannot set governor of lcore 0 to userspace 00:06:40.193 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:40.193 POWER: Cannot set governor of lcore 0 to userspace 00:06:40.193 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:40.193 POWER: Unable to set Power Management Environment for lcore 0 00:06:40.193 [2024-11-19 12:26:45.380504] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:40.193 [2024-11-19 12:26:45.380517] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:40.193 [2024-11-19 12:26:45.380526] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:40.193 [2024-11-19 12:26:45.380537] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:40.193 [2024-11-19 
12:26:45.380545] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:40.193 [2024-11-19 12:26:45.380551] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:40.193 12:26:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.193 12:26:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:40.193 12:26:45 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.193 12:26:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:40.193 [2024-11-19 12:26:45.415518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.193 [2024-11-19 12:26:45.431044] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:40.193 12:26:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.193 12:26:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:40.193 12:26:45 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.193 12:26:45 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.193 12:26:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:40.193 ************************************ 00:06:40.193 START TEST scheduler_create_thread 00:06:40.193 ************************************ 00:06:40.193 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:40.193 12:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:40.193 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.193 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.452 2 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.452 3 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.452 4 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:40.452 12:26:45 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.452 5 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.452 6 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.452 7 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.452 8 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.452 9 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:40.452 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.453 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.453 10 00:06:40.453 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.453 12:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:40.453 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.453 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.453 12:26:45 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.453 12:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:40.453 12:26:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:40.453 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.453 12:26:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.391 12:26:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.391 12:26:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:41.391 12:26:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.391 12:26:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.771 12:26:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.771 12:26:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:42.771 12:26:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:42.771 12:26:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.771 12:26:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.709 ************************************ 00:06:43.709 END TEST scheduler_create_thread 00:06:43.709 ************************************ 00:06:43.709 12:26:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.709 00:06:43.709 real 0m3.375s 00:06:43.709 user 0m0.018s 00:06:43.709 sys 0m0.008s 00:06:43.709 12:26:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.709 12:26:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.709 12:26:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:43.709 12:26:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71277 00:06:43.709 12:26:48 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 71277 ']' 00:06:43.709 12:26:48 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 71277 00:06:43.709 12:26:48 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:43.709 12:26:48 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.709 12:26:48 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71277 00:06:43.709 killing process with pid 71277 00:06:43.709 12:26:48 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:43.709 12:26:48 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:43.709 12:26:48 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71277' 00:06:43.709 12:26:48 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 71277 00:06:43.709 
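(A condensed sketch of the scheduler-plugin RPC sequence the scheduler_create_thread test above drives, assuming the scheduler test app is still listening on the default /var/tmp/spdk.sock; the thread id is whatever scheduler_thread_create prints back, 11 and 12 in this particular run.)

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# create an active thread pinned to core 0 (mask 0x1) with 100% requested load
tid=$("$RPC" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
# lower its active load to 50% ...
"$RPC" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
# ... and remove it again
"$RPC" --plugin scheduler_plugin scheduler_thread_delete "$tid"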
12:26:48 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 71277 00:06:43.968 [2024-11-19 12:26:49.198467] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:44.227 00:06:44.227 real 0m4.545s 00:06:44.227 user 0m7.901s 00:06:44.227 sys 0m0.297s 00:06:44.227 12:26:49 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.227 12:26:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:44.227 ************************************ 00:06:44.227 END TEST event_scheduler 00:06:44.227 ************************************ 00:06:44.227 12:26:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:44.227 12:26:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:44.227 12:26:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.227 12:26:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.227 12:26:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.227 ************************************ 00:06:44.227 START TEST app_repeat 00:06:44.227 ************************************ 00:06:44.227 12:26:49 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:44.227 12:26:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.227 12:26:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.227 12:26:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:44.227 12:26:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.227 12:26:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:44.227 12:26:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:44.227 12:26:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:44.227 12:26:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71374 00:06:44.227 12:26:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:44.227 Process app_repeat pid: 71374 00:06:44.227 12:26:49 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:44.227 12:26:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71374' 00:06:44.227 12:26:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:44.227 spdk_app_start Round 0 00:06:44.227 12:26:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:44.227 12:26:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71374 /var/tmp/spdk-nbd.sock 00:06:44.227 12:26:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71374 ']' 00:06:44.227 12:26:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.227 12:26:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:44.227 12:26:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:44.227 12:26:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.227 12:26:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.227 [2024-11-19 12:26:49.478204] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:44.227 [2024-11-19 12:26:49.478486] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71374 ] 00:06:44.486 [2024-11-19 12:26:49.613222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.486 [2024-11-19 12:26:49.649335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.486 [2024-11-19 12:26:49.649342] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.486 [2024-11-19 12:26:49.677575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.486 12:26:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.486 12:26:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:44.486 12:26:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.053 Malloc0 00:06:45.053 12:26:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.053 Malloc1 00:06:45.313 12:26:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.313 12:26:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.313 12:26:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.313 12:26:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.313 12:26:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.313 12:26:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.313 12:26:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.313 12:26:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.313 12:26:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.313 12:26:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.313 12:26:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.313 12:26:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.313 12:26:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:45.313 12:26:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.313 12:26:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.313 12:26:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:45.572 /dev/nbd0 00:06:45.572 12:26:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.572 12:26:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.572 12:26:50 event.app_repeat -- common/autotest_common.sh@868 -- # local 
nbd_name=nbd0 00:06:45.572 12:26:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:45.572 12:26:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:45.572 12:26:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:45.572 12:26:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:45.572 12:26:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:45.572 12:26:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:45.572 12:26:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:45.572 12:26:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.572 1+0 records in 00:06:45.572 1+0 records out 00:06:45.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344332 s, 11.9 MB/s 00:06:45.572 12:26:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.572 12:26:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:45.572 12:26:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.572 12:26:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:45.572 12:26:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:45.572 12:26:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.572 12:26:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.572 12:26:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.831 /dev/nbd1 00:06:45.831 12:26:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.831 12:26:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.831 12:26:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:45.831 12:26:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:45.831 12:26:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:45.831 12:26:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:45.831 12:26:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:45.831 12:26:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:45.831 12:26:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:45.831 12:26:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:45.831 12:26:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.831 1+0 records in 00:06:45.831 1+0 records out 00:06:45.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025687 s, 15.9 MB/s 00:06:45.831 12:26:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.831 12:26:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:45.831 12:26:50 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.831 12:26:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:45.831 12:26:50 event.app_repeat -- 
common/autotest_common.sh@889 -- # return 0 00:06:45.831 12:26:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.831 12:26:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.831 12:26:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.831 12:26:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.831 12:26:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.090 { 00:06:46.090 "nbd_device": "/dev/nbd0", 00:06:46.090 "bdev_name": "Malloc0" 00:06:46.090 }, 00:06:46.090 { 00:06:46.090 "nbd_device": "/dev/nbd1", 00:06:46.090 "bdev_name": "Malloc1" 00:06:46.090 } 00:06:46.090 ]' 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.090 { 00:06:46.090 "nbd_device": "/dev/nbd0", 00:06:46.090 "bdev_name": "Malloc0" 00:06:46.090 }, 00:06:46.090 { 00:06:46.090 "nbd_device": "/dev/nbd1", 00:06:46.090 "bdev_name": "Malloc1" 00:06:46.090 } 00:06:46.090 ]' 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.090 /dev/nbd1' 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.090 /dev/nbd1' 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.090 256+0 records in 00:06:46.090 256+0 records out 00:06:46.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106159 s, 98.8 MB/s 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.090 256+0 records in 00:06:46.090 256+0 records out 00:06:46.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209132 s, 50.1 MB/s 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.090 256+0 records in 00:06:46.090 
256+0 records out 00:06:46.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025522 s, 41.1 MB/s 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.090 12:26:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:46.349 12:26:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.349 12:26:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:46.349 12:26:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.349 12:26:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:46.349 12:26:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.349 12:26:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.349 12:26:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.349 12:26:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:46.349 12:26:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.349 12:26:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.607 12:26:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.607 12:26:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.607 12:26:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.607 12:26:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.607 12:26:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.607 12:26:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.607 12:26:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.607 12:26:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.607 12:26:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.607 12:26:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.866 12:26:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.866 12:26:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.866 12:26:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.866 12:26:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.866 12:26:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:06:46.866 12:26:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.866 12:26:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.866 12:26:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.866 12:26:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.866 12:26:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.866 12:26:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.128 12:26:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.128 12:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.128 12:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.128 12:26:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.128 12:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.128 12:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.128 12:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.128 12:26:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.128 12:26:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.128 12:26:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.128 12:26:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.128 12:26:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.128 12:26:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:47.414 12:26:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:47.414 [2024-11-19 12:26:52.658384] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.685 [2024-11-19 12:26:52.693685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.685 [2024-11-19 12:26:52.693688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.685 [2024-11-19 12:26:52.721242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.685 [2024-11-19 12:26:52.721330] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.685 [2024-11-19 12:26:52.721343] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:51.003 spdk_app_start Round 1 00:06:51.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.003 12:26:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:51.003 12:26:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:51.003 12:26:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71374 /var/tmp/spdk-nbd.sock 00:06:51.003 12:26:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71374 ']' 00:06:51.003 12:26:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.003 12:26:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.003 12:26:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
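(Before each round the app_repeat test rebuilds the same setup; a minimal sketch of that per-round flow, assuming app_repeat is listening on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded, is:)

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
# create two 64 MiB malloc bdevs with a 4 KiB block size (the names Malloc0/Malloc1 are auto-assigned)
$RPC bdev_malloc_create 64 4096
$RPC bdev_malloc_create 64 4096
# expose both bdevs as NBD block devices
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1
# ... write/verify I/O against /dev/nbd0 and /dev/nbd1 ...
# tear the devices down again before the next round
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1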
00:06:51.003 12:26:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.003 12:26:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.003 12:26:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.003 12:26:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:51.003 12:26:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.003 Malloc0 00:06:51.003 12:26:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.262 Malloc1 00:06:51.262 12:26:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.262 12:26:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.262 12:26:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.262 12:26:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.262 12:26:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.262 12:26:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.262 12:26:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.262 12:26:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.262 12:26:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.262 12:26:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.262 12:26:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.262 12:26:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.262 12:26:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.262 12:26:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.262 12:26:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.262 12:26:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.521 /dev/nbd0 00:06:51.522 12:26:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.522 12:26:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.522 12:26:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:51.522 12:26:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:51.522 12:26:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.522 12:26:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.522 12:26:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:51.522 12:26:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:51.522 12:26:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.522 12:26:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.522 12:26:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.522 1+0 records in 00:06:51.522 1+0 records out 
00:06:51.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272609 s, 15.0 MB/s 00:06:51.522 12:26:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.522 12:26:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:51.522 12:26:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.522 12:26:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.522 12:26:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:51.522 12:26:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.522 12:26:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.522 12:26:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:51.782 /dev/nbd1 00:06:51.782 12:26:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.782 12:26:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.782 12:26:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:51.782 12:26:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:51.782 12:26:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.782 12:26:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.782 12:26:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:51.782 12:26:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:51.782 12:26:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.782 12:26:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.782 12:26:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.782 1+0 records in 00:06:51.782 1+0 records out 00:06:51.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256477 s, 16.0 MB/s 00:06:51.782 12:26:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.782 12:26:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:51.782 12:26:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.782 12:26:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.782 12:26:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:51.782 12:26:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.782 12:26:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.782 12:26:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.782 12:26:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.782 12:26:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.041 12:26:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.041 { 00:06:52.041 "nbd_device": "/dev/nbd0", 00:06:52.041 "bdev_name": "Malloc0" 00:06:52.041 }, 00:06:52.041 { 00:06:52.041 "nbd_device": "/dev/nbd1", 00:06:52.041 "bdev_name": "Malloc1" 00:06:52.041 } 
00:06:52.041 ]' 00:06:52.042 12:26:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.042 { 00:06:52.042 "nbd_device": "/dev/nbd0", 00:06:52.042 "bdev_name": "Malloc0" 00:06:52.042 }, 00:06:52.042 { 00:06:52.042 "nbd_device": "/dev/nbd1", 00:06:52.042 "bdev_name": "Malloc1" 00:06:52.042 } 00:06:52.042 ]' 00:06:52.042 12:26:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.042 12:26:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.042 /dev/nbd1' 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.302 /dev/nbd1' 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.302 256+0 records in 00:06:52.302 256+0 records out 00:06:52.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101785 s, 103 MB/s 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.302 256+0 records in 00:06:52.302 256+0 records out 00:06:52.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253044 s, 41.4 MB/s 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.302 256+0 records in 00:06:52.302 256+0 records out 00:06:52.302 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259225 s, 40.5 MB/s 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.302 12:26:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.561 12:26:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.561 12:26:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.561 12:26:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.561 12:26:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.561 12:26:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.561 12:26:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.561 12:26:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.561 12:26:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.561 12:26:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.561 12:26:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.820 12:26:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.820 12:26:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.820 12:26:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.820 12:26:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.820 12:26:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.820 12:26:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.820 12:26:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.820 12:26:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.820 12:26:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.820 12:26:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.820 12:26:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.078 12:26:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.078 12:26:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.078 12:26:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:53.337 12:26:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.337 12:26:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.337 12:26:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.337 12:26:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:53.337 12:26:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.337 12:26:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.337 12:26:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.337 12:26:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.337 12:26:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.337 12:26:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:53.595 12:26:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:53.595 [2024-11-19 12:26:58.772343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.595 [2024-11-19 12:26:58.809853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.595 [2024-11-19 12:26:58.809863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.595 [2024-11-19 12:26:58.840117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.595 [2024-11-19 12:26:58.840218] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.595 [2024-11-19 12:26:58.840231] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:56.893 spdk_app_start Round 2 00:06:56.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.893 12:27:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:56.893 12:27:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:56.893 12:27:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71374 /var/tmp/spdk-nbd.sock 00:06:56.893 12:27:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71374 ']' 00:06:56.893 12:27:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.893 12:27:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.893 12:27:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
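For reference, the count check traced just above (nbd_get_count in nbd_common.sh) reduces to a small standalone pattern: query the target over its RPC socket, pull the device names out of the JSON with jq, and count the /dev/nbd entries. The socket path and rpc.py location below are taken from the log; the assertion wrapper is purely illustrative.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-nbd.sock

  # JSON array of {"nbd_device": ..., "bdev_name": ...} objects
  disks_json=$("$RPC" -s "$SOCK" nbd_get_disks)
  # one /dev/nbdX per line (empty once every disk has been stopped)
  disks_name=$(jq -r '.[] | .nbd_device' <<< "$disks_json")
  # grep -c exits non-zero on zero matches, hence the || true
  count=$(grep -c /dev/nbd <<< "$disks_name" || true)
  [ "$count" -eq 0 ] || echo "unexpected nbd devices still attached: $count"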
00:06:56.893 12:27:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.893 12:27:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.893 12:27:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.893 12:27:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:56.893 12:27:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.152 Malloc0 00:06:57.152 12:27:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.410 Malloc1 00:06:57.410 12:27:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:57.410 12:27:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.410 12:27:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:57.410 12:27:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:57.410 12:27:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.410 12:27:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:57.410 12:27:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:57.410 12:27:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.410 12:27:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:57.410 12:27:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:57.410 12:27:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.410 12:27:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:57.410 12:27:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:57.410 12:27:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:57.410 12:27:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.410 12:27:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:57.670 /dev/nbd0 00:06:57.670 12:27:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:57.670 12:27:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:57.670 12:27:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:57.670 12:27:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:57.670 12:27:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.670 12:27:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.670 12:27:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:57.670 12:27:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:57.670 12:27:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.670 12:27:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.670 12:27:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.670 1+0 records in 00:06:57.670 1+0 records out 
00:06:57.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303443 s, 13.5 MB/s 00:06:57.670 12:27:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.670 12:27:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:57.670 12:27:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.670 12:27:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.670 12:27:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:57.670 12:27:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.670 12:27:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.670 12:27:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:57.929 /dev/nbd1 00:06:57.929 12:27:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:57.929 12:27:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:57.929 12:27:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:57.929 12:27:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:57.929 12:27:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.930 12:27:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.930 12:27:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:57.930 12:27:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:57.930 12:27:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.930 12:27:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.930 12:27:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.930 1+0 records in 00:06:57.930 1+0 records out 00:06:57.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308775 s, 13.3 MB/s 00:06:57.930 12:27:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.930 12:27:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:57.930 12:27:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.930 12:27:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.930 12:27:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:57.930 12:27:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.930 12:27:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.930 12:27:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.930 12:27:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.930 12:27:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:58.497 { 00:06:58.497 "nbd_device": "/dev/nbd0", 00:06:58.497 "bdev_name": "Malloc0" 00:06:58.497 }, 00:06:58.497 { 00:06:58.497 "nbd_device": "/dev/nbd1", 00:06:58.497 "bdev_name": "Malloc1" 00:06:58.497 } 
00:06:58.497 ]' 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:58.497 { 00:06:58.497 "nbd_device": "/dev/nbd0", 00:06:58.497 "bdev_name": "Malloc0" 00:06:58.497 }, 00:06:58.497 { 00:06:58.497 "nbd_device": "/dev/nbd1", 00:06:58.497 "bdev_name": "Malloc1" 00:06:58.497 } 00:06:58.497 ]' 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:58.497 /dev/nbd1' 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:58.497 /dev/nbd1' 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:58.497 256+0 records in 00:06:58.497 256+0 records out 00:06:58.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104869 s, 100 MB/s 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:58.497 256+0 records in 00:06:58.497 256+0 records out 00:06:58.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211767 s, 49.5 MB/s 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:58.497 256+0 records in 00:06:58.497 256+0 records out 00:06:58.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239671 s, 43.8 MB/s 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:58.497 12:27:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:58.498 12:27:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.498 12:27:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:58.498 12:27:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:58.498 12:27:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:58.498 12:27:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.498 12:27:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.498 12:27:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:58.498 12:27:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:58.498 12:27:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.498 12:27:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:58.756 12:27:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:58.756 12:27:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:58.756 12:27:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:58.756 12:27:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.756 12:27:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.756 12:27:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:58.756 12:27:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:58.756 12:27:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.756 12:27:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.756 12:27:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:59.016 12:27:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:59.016 12:27:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:59.016 12:27:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:59.016 12:27:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.016 12:27:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.016 12:27:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:59.016 12:27:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.016 12:27:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.016 12:27:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.016 12:27:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.016 12:27:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.275 12:27:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:59.275 12:27:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:59.275 12:27:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:59.275 12:27:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:59.275 12:27:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:59.275 12:27:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.275 12:27:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:59.275 12:27:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:59.275 12:27:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:59.275 12:27:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:59.275 12:27:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:59.275 12:27:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:59.275 12:27:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:59.842 12:27:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:59.842 [2024-11-19 12:27:04.920365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.842 [2024-11-19 12:27:04.952736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.842 [2024-11-19 12:27:04.952746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.842 [2024-11-19 12:27:04.980291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.842 [2024-11-19 12:27:04.980398] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:59.843 [2024-11-19 12:27:04.980411] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:03.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:03.129 12:27:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71374 /var/tmp/spdk-nbd.sock 00:07:03.129 12:27:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71374 ']' 00:07:03.129 12:27:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.129 12:27:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.129 12:27:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
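The data-integrity pass exercised above (nbd_dd_data_verify, write then verify) boils down to the sketch below. This is a reduced illustration rather than the helper itself; the temp-file path, block size, and block count mirror the log.

  tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1)

  # write phase: 1 MiB of random data, pushed to every nbd device with O_DIRECT
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # verify phase: byte-compare the first 1 MiB of each device against the file
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"   # any mismatch exits non-zero and fails the test
  done
  rm "$tmp_file"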
00:07:03.129 12:27:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.129 12:27:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.129 12:27:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.129 12:27:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:03.129 12:27:08 event.app_repeat -- event/event.sh@39 -- # killprocess 71374 00:07:03.129 12:27:08 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 71374 ']' 00:07:03.129 12:27:08 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 71374 00:07:03.129 12:27:08 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:03.129 12:27:08 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.129 12:27:08 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71374 00:07:03.129 killing process with pid 71374 00:07:03.129 12:27:08 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.129 12:27:08 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.129 12:27:08 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71374' 00:07:03.129 12:27:08 event.app_repeat -- common/autotest_common.sh@969 -- # kill 71374 00:07:03.129 12:27:08 event.app_repeat -- common/autotest_common.sh@974 -- # wait 71374 00:07:03.129 spdk_app_start is called in Round 0. 00:07:03.129 Shutdown signal received, stop current app iteration 00:07:03.129 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:03.129 spdk_app_start is called in Round 1. 00:07:03.129 Shutdown signal received, stop current app iteration 00:07:03.129 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:03.129 spdk_app_start is called in Round 2. 00:07:03.129 Shutdown signal received, stop current app iteration 00:07:03.129 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:07:03.129 spdk_app_start is called in Round 3. 00:07:03.129 Shutdown signal received, stop current app iteration 00:07:03.129 12:27:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:03.129 12:27:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:03.129 00:07:03.129 real 0m18.809s 00:07:03.129 user 0m43.263s 00:07:03.129 sys 0m2.588s 00:07:03.129 ************************************ 00:07:03.129 END TEST app_repeat 00:07:03.129 ************************************ 00:07:03.129 12:27:08 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.129 12:27:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.129 12:27:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:03.129 12:27:08 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:03.129 12:27:08 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.129 12:27:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.129 12:27:08 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.129 ************************************ 00:07:03.129 START TEST cpu_locks 00:07:03.129 ************************************ 00:07:03.129 12:27:08 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:03.129 * Looking for test storage... 
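killprocess, traced several times above (most recently for pid 71374), is roughly the following. This is a trimmed sketch of the autotest_common.sh helper; the real one also escalates through sudo when the target runs as root and propagates errors more carefully.

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 0            # already gone, nothing to do in this sketch
      local process_name
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }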
00:07:03.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:03.129 12:27:08 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:03.390 12:27:08 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:03.390 12:27:08 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:03.390 12:27:08 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.390 12:27:08 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:03.390 12:27:08 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.390 12:27:08 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:03.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.390 --rc genhtml_branch_coverage=1 00:07:03.390 --rc genhtml_function_coverage=1 00:07:03.390 --rc genhtml_legend=1 00:07:03.390 --rc geninfo_all_blocks=1 00:07:03.390 --rc geninfo_unexecuted_blocks=1 00:07:03.390 00:07:03.390 ' 00:07:03.390 12:27:08 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:03.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.390 --rc genhtml_branch_coverage=1 00:07:03.390 --rc genhtml_function_coverage=1 
00:07:03.390 --rc genhtml_legend=1 00:07:03.390 --rc geninfo_all_blocks=1 00:07:03.390 --rc geninfo_unexecuted_blocks=1 00:07:03.390 00:07:03.390 ' 00:07:03.390 12:27:08 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:03.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.390 --rc genhtml_branch_coverage=1 00:07:03.390 --rc genhtml_function_coverage=1 00:07:03.390 --rc genhtml_legend=1 00:07:03.390 --rc geninfo_all_blocks=1 00:07:03.390 --rc geninfo_unexecuted_blocks=1 00:07:03.390 00:07:03.390 ' 00:07:03.390 12:27:08 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:03.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.390 --rc genhtml_branch_coverage=1 00:07:03.390 --rc genhtml_function_coverage=1 00:07:03.390 --rc genhtml_legend=1 00:07:03.390 --rc geninfo_all_blocks=1 00:07:03.390 --rc geninfo_unexecuted_blocks=1 00:07:03.390 00:07:03.390 ' 00:07:03.390 12:27:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:03.390 12:27:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:03.390 12:27:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:03.390 12:27:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:03.390 12:27:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.390 12:27:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.390 12:27:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.390 ************************************ 00:07:03.390 START TEST default_locks 00:07:03.390 ************************************ 00:07:03.390 12:27:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:03.390 12:27:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71820 00:07:03.390 12:27:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71820 00:07:03.390 12:27:08 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71820 ']' 00:07:03.390 12:27:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.390 12:27:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.390 12:27:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.390 12:27:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.390 12:27:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.390 12:27:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.390 [2024-11-19 12:27:08.552805] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
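The lcov version gate traced a little earlier in this block (lt 1.15 2 via cmp_versions in scripts/common.sh) is a field-by-field numeric compare. The sketch below is a condensed stand-in that omits the decimal() normalisation and the lt/gt/eq bookkeeping the real helper performs.

  lt() { cmp_versions "$1" "<" "$2"; }

  cmp_versions() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local op=$2 v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [ "$op" = ">" ] || [ "$op" = ">=" ]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [ "$op" = "<" ] || [ "$op" = "<=" ]; return; }
      done
      [ "$op" = "==" ] || [ "$op" = "<=" ] || [ "$op" = ">=" ]
  }

  # example: pick the pre-2.0 lcov option set only for older lcov
  lt 1.15 2 && echo "lcov older than 2"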
00:07:03.390 [2024-11-19 12:27:08.552879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71820 ] 00:07:03.650 [2024-11-19 12:27:08.681632] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.650 [2024-11-19 12:27:08.716474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.650 [2024-11-19 12:27:08.753436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.217 12:27:09 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.217 12:27:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:04.217 12:27:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71820 00:07:04.217 12:27:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.217 12:27:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71820 00:07:04.786 12:27:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71820 00:07:04.786 12:27:09 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 71820 ']' 00:07:04.786 12:27:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 71820 00:07:04.786 12:27:09 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:04.786 12:27:09 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.786 12:27:09 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71820 00:07:04.786 12:27:09 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.786 12:27:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.786 killing process with pid 71820 00:07:04.786 12:27:09 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71820' 00:07:04.786 12:27:09 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 71820 00:07:04.786 12:27:09 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 71820 00:07:05.045 12:27:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71820 00:07:05.045 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71820 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 71820 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 71820 ']' 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.046 
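The locks_exist check traced above (cpu_locks.sh line 22) is essentially a one-liner: ask lslocks which file locks the target pid holds and look for the spdk_cpu_lock entries. The pid and the assertion wrapper below are illustrative.

  # does the target with this pid hold its per-core lock file?
  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  locks_exist 71820 || echo "expected pid 71820 to hold its CPU core lock"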
12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.046 ERROR: process (pid: 71820) is no longer running 00:07:05.046 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71820) - No such process 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:05.046 00:07:05.046 real 0m1.688s 00:07:05.046 user 0m1.883s 00:07:05.046 sys 0m0.470s 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.046 12:27:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.046 ************************************ 00:07:05.046 END TEST default_locks 00:07:05.046 ************************************ 00:07:05.046 12:27:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:05.046 12:27:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.046 12:27:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.046 12:27:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.046 ************************************ 00:07:05.046 START TEST default_locks_via_rpc 00:07:05.046 ************************************ 00:07:05.046 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:05.046 12:27:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71869 00:07:05.046 12:27:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71869 00:07:05.046 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71869 ']' 00:07:05.046 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.046 12:27:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.046 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:07:05.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.046 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.046 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.046 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.306 [2024-11-19 12:27:10.303198] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:05.306 [2024-11-19 12:27:10.303320] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71869 ] 00:07:05.306 [2024-11-19 12:27:10.435348] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.306 [2024-11-19 12:27:10.469243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.306 [2024-11-19 12:27:10.503836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71869 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71869 00:07:05.565 12:27:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.825 12:27:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71869 00:07:05.825 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 71869 ']' 00:07:05.825 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 71869 00:07:05.825 12:27:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:05.825 12:27:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.825 12:27:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71869 00:07:05.825 12:27:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.825 12:27:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.825 killing process with pid 71869 00:07:05.825 12:27:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71869' 00:07:05.825 12:27:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 71869 00:07:05.825 12:27:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 71869 00:07:06.085 00:07:06.085 real 0m1.033s 00:07:06.085 user 0m1.085s 00:07:06.085 sys 0m0.392s 00:07:06.085 12:27:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.085 12:27:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.085 ************************************ 00:07:06.085 END TEST default_locks_via_rpc 00:07:06.085 ************************************ 00:07:06.085 12:27:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:06.085 12:27:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.085 12:27:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.085 12:27:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.085 ************************************ 00:07:06.085 START TEST non_locking_app_on_locked_coremask 00:07:06.085 ************************************ 00:07:06.085 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:06.085 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71911 00:07:06.085 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71911 /var/tmp/spdk.sock 00:07:06.085 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.085 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71911 ']' 00:07:06.085 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.085 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.085 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
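default_locks_via_rpc, which finished just above, toggles the same core locks at runtime instead of with a startup flag. A sketch of that round trip against the default RPC socket; the pid is the one from the log, the lslocks checks are illustrative.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$RPC" framework_disable_cpumask_locks              # release the per-core lock files
  lslocks -p 71869 | grep -c spdk_cpu_lock || true    # expect 0 while disabled
  "$RPC" framework_enable_cpumask_locks               # take them again
  lslocks -p 71869 | grep -q spdk_cpu_lock            # lock is back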
00:07:06.085 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.085 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.344 [2024-11-19 12:27:11.375152] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:06.344 [2024-11-19 12:27:11.375256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71911 ] 00:07:06.344 [2024-11-19 12:27:11.504870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.344 [2024-11-19 12:27:11.537650] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.344 [2024-11-19 12:27:11.573243] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.603 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.603 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:06.603 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71914 00:07:06.603 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71914 /var/tmp/spdk2.sock 00:07:06.603 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71914 ']' 00:07:06.603 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.603 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.603 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.603 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.603 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.603 12:27:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:06.603 [2024-11-19 12:27:11.739576] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:06.603 [2024-11-19 12:27:11.739691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71914 ] 00:07:06.863 [2024-11-19 12:27:11.875213] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
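non_locking_app_on_locked_coremask, whose startup is traced above, runs two targets on the same core mask; only the second opts out of core locking, which is why both come up. A condensed sketch of that arrangement, with backgrounding and waitforlisten handling simplified:

  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  # first target: claims core 0 and its spdk_cpu_lock file
  "$SPDK_TGT" -m 0x1 &
  pid1=$!

  # second target: same core, but started without cpumask locks and on its
  # own RPC socket, so it does not collide with the first one
  "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!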
00:07:06.863 [2024-11-19 12:27:11.875263] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.863 [2024-11-19 12:27:11.946306] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.863 [2024-11-19 12:27:12.019138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.800 12:27:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.800 12:27:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:07.800 12:27:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71911 00:07:07.800 12:27:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71911 00:07:07.800 12:27:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.738 12:27:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71911 00:07:08.738 12:27:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71911 ']' 00:07:08.738 12:27:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71911 00:07:08.738 12:27:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:08.738 12:27:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.738 12:27:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71911 00:07:08.738 12:27:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.738 12:27:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.738 killing process with pid 71911 00:07:08.738 12:27:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71911' 00:07:08.738 12:27:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71911 00:07:08.738 12:27:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71911 00:07:08.997 12:27:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71914 00:07:08.997 12:27:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71914 ']' 00:07:08.997 12:27:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71914 00:07:08.997 12:27:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:08.997 12:27:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.997 12:27:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71914 00:07:08.997 12:27:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.997 12:27:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.997 12:27:14 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 71914' 00:07:08.997 killing process with pid 71914 00:07:08.997 12:27:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71914 00:07:08.997 12:27:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71914 00:07:09.257 00:07:09.257 real 0m3.080s 00:07:09.257 user 0m3.590s 00:07:09.257 sys 0m0.907s 00:07:09.257 12:27:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.257 12:27:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.257 ************************************ 00:07:09.257 END TEST non_locking_app_on_locked_coremask 00:07:09.257 ************************************ 00:07:09.257 12:27:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:09.257 12:27:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.257 12:27:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.257 12:27:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.257 ************************************ 00:07:09.257 START TEST locking_app_on_unlocked_coremask 00:07:09.257 ************************************ 00:07:09.257 12:27:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:09.257 12:27:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71981 00:07:09.257 12:27:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:09.257 12:27:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71981 /var/tmp/spdk.sock 00:07:09.257 12:27:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71981 ']' 00:07:09.257 12:27:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.257 12:27:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.257 12:27:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.257 12:27:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.257 12:27:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.517 [2024-11-19 12:27:14.522692] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:09.517 [2024-11-19 12:27:14.522829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71981 ] 00:07:09.517 [2024-11-19 12:27:14.655809] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
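waitforlisten, which the trace above enters for pid 71981, polls until the target's RPC socket answers. The reduction below is only a rough sketch of the autotest_common.sh helper; the rpc_get_methods probe and the 0.5 s retry interval are assumptions of this sketch, not verified against the real function.

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      while (( max_retries-- > 0 )); do
          # bail out if the target died instead of starting up
          kill -0 "$pid" 2> /dev/null || { echo "ERROR: process (pid: $pid) is no longer running"; return 1; }
          # any successful RPC means the socket is up; rpc_get_methods is a cheap probe (assumed here)
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
              return 0
          fi
          sleep 0.5
      done
      return 1
  }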
00:07:09.517 [2024-11-19 12:27:14.655844] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.517 [2024-11-19 12:27:14.690967] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.517 [2024-11-19 12:27:14.729988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.454 12:27:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.454 12:27:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:10.454 12:27:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71997 00:07:10.454 12:27:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:10.454 12:27:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71997 /var/tmp/spdk2.sock 00:07:10.454 12:27:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71997 ']' 00:07:10.454 12:27:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.454 12:27:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.455 12:27:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.455 12:27:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.455 12:27:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.455 [2024-11-19 12:27:15.560521] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:10.455 [2024-11-19 12:27:15.560625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71997 ] 00:07:10.455 [2024-11-19 12:27:15.699148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.714 [2024-11-19 12:27:15.775997] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.714 [2024-11-19 12:27:15.848791] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.652 12:27:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.652 12:27:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:11.652 12:27:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71997 00:07:11.652 12:27:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71997 00:07:11.652 12:27:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.590 12:27:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71981 00:07:12.590 12:27:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71981 ']' 00:07:12.590 12:27:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71981 00:07:12.590 12:27:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:12.590 12:27:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.590 12:27:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71981 00:07:12.590 12:27:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.590 12:27:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.590 12:27:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71981' 00:07:12.590 killing process with pid 71981 00:07:12.590 12:27:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71981 00:07:12.590 12:27:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71981 00:07:12.850 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71997 00:07:12.850 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71997 ']' 00:07:12.850 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71997 00:07:12.850 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:12.850 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.850 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71997 00:07:12.850 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.850 killing process with pid 71997 00:07:12.850 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.850 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71997' 00:07:12.850 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71997 00:07:12.850 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71997 00:07:13.109 00:07:13.109 real 0m3.843s 00:07:13.109 user 0m4.577s 00:07:13.109 sys 0m0.978s 00:07:13.109 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.109 12:27:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.109 ************************************ 00:07:13.109 END TEST locking_app_on_unlocked_coremask 00:07:13.109 ************************************ 00:07:13.109 12:27:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:13.109 12:27:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.109 12:27:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.109 12:27:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.109 ************************************ 00:07:13.109 START TEST locking_app_on_locked_coremask 00:07:13.109 ************************************ 00:07:13.109 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:13.109 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72059 00:07:13.109 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72059 /var/tmp/spdk.sock 00:07:13.109 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.109 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72059 ']' 00:07:13.109 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.109 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.109 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.109 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.109 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.369 [2024-11-19 12:27:18.419060] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
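Sketch (not part of the trace): the locks_exist helper traced earlier checks that a running target actually holds its CPU core locks. Assuming the util-linux lslocks used above, a by-hand version of that check would be:

  # A target that claimed its cores holds locks on /var/tmp/spdk_cpu_lock_* files;
  # lslocks lists them per PID, and the grep mirrors cpu_locks.sh@22.
  pid=71997                                   # PID taken from the trace above
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core locks held by $pid"
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null     # one lock file per claimed core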
00:07:13.369 [2024-11-19 12:27:18.419171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72059 ] 00:07:13.369 [2024-11-19 12:27:18.557981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.369 [2024-11-19 12:27:18.594239] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.628 [2024-11-19 12:27:18.633436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72067 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72067 /var/tmp/spdk2.sock 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72067 /var/tmp/spdk2.sock 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72067 /var/tmp/spdk2.sock 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72067 ']' 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.628 12:27:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.628 [2024-11-19 12:27:18.824220] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:13.628 [2024-11-19 12:27:18.824323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72067 ] 00:07:13.887 [2024-11-19 12:27:18.968024] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72059 has claimed it. 00:07:13.887 [2024-11-19 12:27:18.968106] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:14.453 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72067) - No such process 00:07:14.453 ERROR: process (pid: 72067) is no longer running 00:07:14.453 12:27:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.453 12:27:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:14.453 12:27:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:14.453 12:27:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:14.453 12:27:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:14.453 12:27:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:14.453 12:27:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72059 00:07:14.453 12:27:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72059 00:07:14.453 12:27:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.020 12:27:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72059 00:07:15.020 12:27:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72059 ']' 00:07:15.020 12:27:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72059 00:07:15.020 12:27:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:15.020 12:27:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.020 12:27:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72059 00:07:15.020 12:27:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.020 12:27:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.020 killing process with pid 72059 00:07:15.020 12:27:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72059' 00:07:15.020 12:27:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72059 00:07:15.020 12:27:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72059 00:07:15.020 00:07:15.020 real 0m1.916s 00:07:15.020 user 0m2.287s 00:07:15.020 sys 0m0.530s 00:07:15.020 12:27:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.020 12:27:20 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:15.020 ************************************ 00:07:15.020 END TEST locking_app_on_locked_coremask 00:07:15.020 ************************************ 00:07:15.279 12:27:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:15.279 12:27:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.279 12:27:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.279 12:27:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.279 ************************************ 00:07:15.279 START TEST locking_overlapped_coremask 00:07:15.279 ************************************ 00:07:15.279 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:15.279 12:27:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72113 00:07:15.279 12:27:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72113 /var/tmp/spdk.sock 00:07:15.279 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72113 ']' 00:07:15.279 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.279 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.279 12:27:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:15.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.279 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.279 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.279 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.279 [2024-11-19 12:27:20.381828] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
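Sketch (not part of the trace): the locked-coremask case that just finished boils down to two spdk_tgt instances asked for the same core, with the second one exiting on "Unable to acquire lock on assigned core mask - exiting." A minimal reproduction using the paths from the log might be:

  BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$BIN" -m 0x1 &                        # first target claims core 0
  first=$!
  sleep 1                                # crude settle; the test polls with waitforlisten instead
  "$BIN" -m 0x1 -r /var/tmp/spdk2.sock   # same core, separate RPC socket: expected to fail
  echo "second instance exit code: $?"   # non-zero once core 0 is already claimed
  kill "$first"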
00:07:15.279 [2024-11-19 12:27:20.381942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72113 ] 00:07:15.279 [2024-11-19 12:27:20.519130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.538 [2024-11-19 12:27:20.553894] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.538 [2024-11-19 12:27:20.554026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.538 [2024-11-19 12:27:20.554028] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.538 [2024-11-19 12:27:20.589113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72123 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72123 /var/tmp/spdk2.sock 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72123 /var/tmp/spdk2.sock 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72123 /var/tmp/spdk2.sock 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72123 ']' 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.538 12:27:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.538 [2024-11-19 12:27:20.779601] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:15.538 [2024-11-19 12:27:20.779717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72123 ] 00:07:15.796 [2024-11-19 12:27:20.924381] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72113 has claimed it. 00:07:15.796 [2024-11-19 12:27:20.924445] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:16.364 ERROR: process (pid: 72123) is no longer running 00:07:16.364 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72123) - No such process 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72113 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 72113 ']' 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 72113 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72113 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.364 killing process with pid 72113 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72113' 00:07:16.364 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 72113 00:07:16.364 12:27:21 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 72113 00:07:16.624 00:07:16.624 real 0m1.471s 00:07:16.624 user 0m4.093s 00:07:16.624 sys 0m0.299s 00:07:16.624 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.624 12:27:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.624 ************************************ 00:07:16.624 END TEST locking_overlapped_coremask 00:07:16.624 ************************************ 00:07:16.624 12:27:21 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:16.624 12:27:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.624 12:27:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.624 12:27:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.624 ************************************ 00:07:16.624 START TEST locking_overlapped_coremask_via_rpc 00:07:16.624 ************************************ 00:07:16.624 12:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:16.624 12:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72163 00:07:16.624 12:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72163 /var/tmp/spdk.sock 00:07:16.624 12:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:16.624 12:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72163 ']' 00:07:16.624 12:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.624 12:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.624 12:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.624 12:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.624 12:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.883 [2024-11-19 12:27:21.905365] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:16.883 [2024-11-19 12:27:21.905471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72163 ] 00:07:16.883 [2024-11-19 12:27:22.045032] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
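Worked detail for the overlapped-coremask failure above: mask 0x7 covers cores 0-2 and mask 0x1c covers cores 2-4, so core 2 is the only core both targets ask for, and that is the core named in the claim error. The locking_overlapped_coremask_via_rpc test starting here passes --disable-cpumask-locks, which is why the "CPU core locks deactivated" notice appears and both targets come up; locks are only taken later through the framework_enable_cpumask_locks RPC.

  # Bit arithmetic behind the "Cannot create lock on core 2" message:
  printf 'overlap of 0x7 and 0x1c: 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4 -> bit 2 -> core 2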
00:07:16.883 [2024-11-19 12:27:22.045079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.883 [2024-11-19 12:27:22.079726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.883 [2024-11-19 12:27:22.079858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.883 [2024-11-19 12:27:22.079877] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.883 [2024-11-19 12:27:22.115867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.142 12:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.142 12:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:17.142 12:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72168 00:07:17.142 12:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:17.142 12:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72168 /var/tmp/spdk2.sock 00:07:17.142 12:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72168 ']' 00:07:17.142 12:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.142 12:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:17.142 12:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.142 12:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.142 12:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.142 [2024-11-19 12:27:22.302953] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:17.142 [2024-11-19 12:27:22.303060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72168 ] 00:07:17.401 [2024-11-19 12:27:22.449163] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:17.401 [2024-11-19 12:27:22.449215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.401 [2024-11-19 12:27:22.519451] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.401 [2024-11-19 12:27:22.522748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:17.401 [2024-11-19 12:27:22.522751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.401 [2024-11-19 12:27:22.587617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.340 [2024-11-19 12:27:23.335865] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72163 has claimed it. 
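Sketch (not part of the trace): the two rpc_cmd calls above can be issued by hand with rpc.py against the same sockets. The call on the default socket claims cores 0-2 for pid 72163; the follow-up against the 0x1c target then fails because core 2 is already locked, and the JSON-RPC error it produces (-32603, "Failed to claim CPU core: 2") is dumped in the trace just below.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" framework_enable_cpumask_locks                          # default socket /var/tmp/spdk.sock
  "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # expected to fail: core 2 taken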
00:07:18.340 request: 00:07:18.340 { 00:07:18.340 "method": "framework_enable_cpumask_locks", 00:07:18.340 "req_id": 1 00:07:18.340 } 00:07:18.340 Got JSON-RPC error response 00:07:18.340 response: 00:07:18.340 { 00:07:18.340 "code": -32603, 00:07:18.340 "message": "Failed to claim CPU core: 2" 00:07:18.340 } 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72163 /var/tmp/spdk.sock 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72163 ']' 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.340 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.599 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.599 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:18.599 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72168 /var/tmp/spdk2.sock 00:07:18.599 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72168 ']' 00:07:18.599 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.599 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.599 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:18.599 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.599 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.859 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.859 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:18.859 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:18.859 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:18.859 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:18.859 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:18.859 00:07:18.859 real 0m2.083s 00:07:18.859 user 0m1.266s 00:07:18.859 sys 0m0.167s 00:07:18.859 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.859 12:27:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.859 ************************************ 00:07:18.859 END TEST locking_overlapped_coremask_via_rpc 00:07:18.859 ************************************ 00:07:18.859 12:27:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:18.859 12:27:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72163 ]] 00:07:18.859 12:27:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72163 00:07:18.859 12:27:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72163 ']' 00:07:18.859 12:27:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72163 00:07:18.859 12:27:23 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:18.859 12:27:23 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.859 12:27:23 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72163 00:07:18.859 12:27:23 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.859 12:27:23 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.859 12:27:23 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72163' 00:07:18.859 killing process with pid 72163 00:07:18.859 12:27:23 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72163 00:07:18.859 12:27:23 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72163 00:07:19.119 12:27:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72168 ]] 00:07:19.119 12:27:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72168 00:07:19.119 12:27:24 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72168 ']' 00:07:19.119 12:27:24 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72168 00:07:19.119 12:27:24 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:19.119 12:27:24 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.119 
12:27:24 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72168 00:07:19.119 12:27:24 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:19.119 12:27:24 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:19.119 killing process with pid 72168 00:07:19.119 12:27:24 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72168' 00:07:19.119 12:27:24 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72168 00:07:19.119 12:27:24 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72168 00:07:19.381 12:27:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:19.381 12:27:24 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:19.381 12:27:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72163 ]] 00:07:19.381 12:27:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72163 00:07:19.381 12:27:24 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72163 ']' 00:07:19.381 12:27:24 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72163 00:07:19.381 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72163) - No such process 00:07:19.381 12:27:24 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72163 is not found' 00:07:19.381 Process with pid 72163 is not found 00:07:19.381 12:27:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72168 ]] 00:07:19.381 12:27:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72168 00:07:19.381 12:27:24 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72168 ']' 00:07:19.381 12:27:24 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72168 00:07:19.381 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72168) - No such process 00:07:19.381 Process with pid 72168 is not found 00:07:19.381 12:27:24 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72168 is not found' 00:07:19.381 12:27:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:19.381 00:07:19.381 real 0m16.186s 00:07:19.381 user 0m29.242s 00:07:19.381 sys 0m4.401s 00:07:19.381 12:27:24 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.381 12:27:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.381 ************************************ 00:07:19.381 END TEST cpu_locks 00:07:19.381 ************************************ 00:07:19.381 00:07:19.381 real 0m43.742s 00:07:19.381 user 1m26.879s 00:07:19.381 sys 0m7.665s 00:07:19.381 12:27:24 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.381 12:27:24 event -- common/autotest_common.sh@10 -- # set +x 00:07:19.381 ************************************ 00:07:19.381 END TEST event 00:07:19.381 ************************************ 00:07:19.381 12:27:24 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:19.381 12:27:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.381 12:27:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.381 12:27:24 -- common/autotest_common.sh@10 -- # set +x 00:07:19.381 ************************************ 00:07:19.381 START TEST thread 00:07:19.381 ************************************ 00:07:19.381 12:27:24 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:19.641 * Looking for test storage... 
00:07:19.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:19.641 12:27:24 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:19.641 12:27:24 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:19.641 12:27:24 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:19.641 12:27:24 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:19.641 12:27:24 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.641 12:27:24 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.641 12:27:24 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.641 12:27:24 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.641 12:27:24 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.641 12:27:24 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.641 12:27:24 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.641 12:27:24 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.641 12:27:24 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.641 12:27:24 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.641 12:27:24 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.641 12:27:24 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:19.641 12:27:24 thread -- scripts/common.sh@345 -- # : 1 00:07:19.641 12:27:24 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.641 12:27:24 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:19.641 12:27:24 thread -- scripts/common.sh@365 -- # decimal 1 00:07:19.641 12:27:24 thread -- scripts/common.sh@353 -- # local d=1 00:07:19.641 12:27:24 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.641 12:27:24 thread -- scripts/common.sh@355 -- # echo 1 00:07:19.641 12:27:24 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.641 12:27:24 thread -- scripts/common.sh@366 -- # decimal 2 00:07:19.641 12:27:24 thread -- scripts/common.sh@353 -- # local d=2 00:07:19.641 12:27:24 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.641 12:27:24 thread -- scripts/common.sh@355 -- # echo 2 00:07:19.641 12:27:24 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.641 12:27:24 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.641 12:27:24 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.641 12:27:24 thread -- scripts/common.sh@368 -- # return 0 00:07:19.641 12:27:24 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.641 12:27:24 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:19.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.641 --rc genhtml_branch_coverage=1 00:07:19.641 --rc genhtml_function_coverage=1 00:07:19.641 --rc genhtml_legend=1 00:07:19.641 --rc geninfo_all_blocks=1 00:07:19.641 --rc geninfo_unexecuted_blocks=1 00:07:19.641 00:07:19.641 ' 00:07:19.641 12:27:24 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:19.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.641 --rc genhtml_branch_coverage=1 00:07:19.641 --rc genhtml_function_coverage=1 00:07:19.641 --rc genhtml_legend=1 00:07:19.641 --rc geninfo_all_blocks=1 00:07:19.641 --rc geninfo_unexecuted_blocks=1 00:07:19.641 00:07:19.641 ' 00:07:19.641 12:27:24 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:19.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:19.641 --rc genhtml_branch_coverage=1 00:07:19.641 --rc genhtml_function_coverage=1 00:07:19.641 --rc genhtml_legend=1 00:07:19.641 --rc geninfo_all_blocks=1 00:07:19.641 --rc geninfo_unexecuted_blocks=1 00:07:19.641 00:07:19.641 ' 00:07:19.641 12:27:24 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:19.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.641 --rc genhtml_branch_coverage=1 00:07:19.641 --rc genhtml_function_coverage=1 00:07:19.641 --rc genhtml_legend=1 00:07:19.641 --rc geninfo_all_blocks=1 00:07:19.641 --rc geninfo_unexecuted_blocks=1 00:07:19.641 00:07:19.641 ' 00:07:19.641 12:27:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:19.641 12:27:24 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:19.641 12:27:24 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.641 12:27:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.641 ************************************ 00:07:19.641 START TEST thread_poller_perf 00:07:19.641 ************************************ 00:07:19.641 12:27:24 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:19.641 [2024-11-19 12:27:24.779960] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:19.641 [2024-11-19 12:27:24.780069] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72299 ] 00:07:19.900 [2024-11-19 12:27:24.919568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.900 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:19.900 [2024-11-19 12:27:24.957628] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.838 [2024-11-19T12:27:26.098Z] ====================================== 00:07:20.838 [2024-11-19T12:27:26.098Z] busy:2207970016 (cyc) 00:07:20.838 [2024-11-19T12:27:26.098Z] total_run_count: 378000 00:07:20.838 [2024-11-19T12:27:26.098Z] tsc_hz: 2200000000 (cyc) 00:07:20.838 [2024-11-19T12:27:26.098Z] ====================================== 00:07:20.838 [2024-11-19T12:27:26.098Z] poller_cost: 5841 (cyc), 2655 (nsec) 00:07:20.838 00:07:20.838 real 0m1.256s 00:07:20.838 user 0m1.115s 00:07:20.838 sys 0m0.035s 00:07:20.838 ************************************ 00:07:20.838 END TEST thread_poller_perf 00:07:20.838 ************************************ 00:07:20.838 12:27:26 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.838 12:27:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:20.838 12:27:26 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:20.838 12:27:26 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:20.838 12:27:26 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.838 12:27:26 thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.838 ************************************ 00:07:20.838 START TEST thread_poller_perf 00:07:20.838 ************************************ 00:07:20.838 12:27:26 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:20.838 [2024-11-19 12:27:26.088802] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:20.838 [2024-11-19 12:27:26.088894] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72334 ] 00:07:21.097 [2024-11-19 12:27:26.226328] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.097 Running 1000 pollers for 1 seconds with 0 microseconds period. 
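Worked numbers from the first poller_perf run above: poller_cost is the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz (2.2 GHz), which reproduces the 5841 cyc / 2655 nsec figures in that table.

  busy=2207970016 runs=378000 tsc_hz=2200000000
  echo "$(( busy / runs )) cyc per poller call"                          # 5841
  echo "$(( busy / runs * 1000000000 / tsc_hz )) nsec per poller call"   # 2655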
00:07:21.097 [2024-11-19 12:27:26.257316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.475 [2024-11-19T12:27:27.735Z] ====================================== 00:07:22.475 [2024-11-19T12:27:27.735Z] busy:2201802314 (cyc) 00:07:22.475 [2024-11-19T12:27:27.735Z] total_run_count: 4977000 00:07:22.475 [2024-11-19T12:27:27.735Z] tsc_hz: 2200000000 (cyc) 00:07:22.475 [2024-11-19T12:27:27.735Z] ====================================== 00:07:22.475 [2024-11-19T12:27:27.735Z] poller_cost: 442 (cyc), 200 (nsec) 00:07:22.475 00:07:22.475 real 0m1.239s 00:07:22.475 user 0m1.095s 00:07:22.475 sys 0m0.038s 00:07:22.475 ************************************ 00:07:22.475 END TEST thread_poller_perf 00:07:22.475 ************************************ 00:07:22.475 12:27:27 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.475 12:27:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:22.475 12:27:27 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:22.475 00:07:22.475 real 0m2.779s 00:07:22.475 user 0m2.357s 00:07:22.475 sys 0m0.212s 00:07:22.475 12:27:27 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.475 12:27:27 thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.475 ************************************ 00:07:22.475 END TEST thread 00:07:22.475 ************************************ 00:07:22.475 12:27:27 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:22.475 12:27:27 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:22.475 12:27:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.475 12:27:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.475 12:27:27 -- common/autotest_common.sh@10 -- # set +x 00:07:22.475 ************************************ 00:07:22.475 START TEST app_cmdline 00:07:22.475 ************************************ 00:07:22.475 12:27:27 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:22.475 * Looking for test storage... 
00:07:22.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:22.475 12:27:27 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:22.475 12:27:27 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:22.475 12:27:27 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:22.475 12:27:27 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.475 12:27:27 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:22.476 12:27:27 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.476 12:27:27 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.476 12:27:27 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.476 12:27:27 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:22.476 12:27:27 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.476 12:27:27 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:22.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.476 --rc genhtml_branch_coverage=1 00:07:22.476 --rc genhtml_function_coverage=1 00:07:22.476 --rc genhtml_legend=1 00:07:22.476 --rc geninfo_all_blocks=1 00:07:22.476 --rc geninfo_unexecuted_blocks=1 00:07:22.476 00:07:22.476 ' 00:07:22.476 12:27:27 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:22.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.476 --rc genhtml_branch_coverage=1 00:07:22.476 --rc genhtml_function_coverage=1 00:07:22.476 --rc genhtml_legend=1 00:07:22.476 --rc geninfo_all_blocks=1 00:07:22.476 --rc geninfo_unexecuted_blocks=1 00:07:22.476 
00:07:22.476 ' 00:07:22.476 12:27:27 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:22.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.476 --rc genhtml_branch_coverage=1 00:07:22.476 --rc genhtml_function_coverage=1 00:07:22.476 --rc genhtml_legend=1 00:07:22.476 --rc geninfo_all_blocks=1 00:07:22.476 --rc geninfo_unexecuted_blocks=1 00:07:22.476 00:07:22.476 ' 00:07:22.476 12:27:27 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:22.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.476 --rc genhtml_branch_coverage=1 00:07:22.476 --rc genhtml_function_coverage=1 00:07:22.476 --rc genhtml_legend=1 00:07:22.476 --rc geninfo_all_blocks=1 00:07:22.476 --rc geninfo_unexecuted_blocks=1 00:07:22.476 00:07:22.476 ' 00:07:22.476 12:27:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:22.476 12:27:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72417 00:07:22.476 12:27:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72417 00:07:22.476 12:27:27 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 72417 ']' 00:07:22.476 12:27:27 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.476 12:27:27 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.476 12:27:27 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.476 12:27:27 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.476 12:27:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:22.476 12:27:27 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:22.476 [2024-11-19 12:27:27.648572] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
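Sketch (not part of the trace): this spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over the RPC socket. Driving it by hand would look like the calls cmdline.sh makes next; the version JSON and the -32601 "Method not found" rejection both show up in the trace below.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" spdk_get_version           # allowed: returns the version/fields JSON
  "$RPC" rpc_get_methods            # allowed: lists the permitted methods
  "$RPC" env_dpdk_get_mem_stats     # not in the allow-list: expected "Method not found"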
00:07:22.476 [2024-11-19 12:27:27.648689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72417 ] 00:07:22.735 [2024-11-19 12:27:27.787589] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.735 [2024-11-19 12:27:27.820227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.735 [2024-11-19 12:27:27.853991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.735 12:27:27 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.735 12:27:27 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:22.735 12:27:27 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:22.994 { 00:07:22.994 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:22.994 "fields": { 00:07:22.994 "major": 24, 00:07:22.994 "minor": 9, 00:07:22.994 "patch": 1, 00:07:22.994 "suffix": "-pre", 00:07:22.994 "commit": "b18e1bd62" 00:07:22.994 } 00:07:22.994 } 00:07:22.994 12:27:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:22.994 12:27:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:22.994 12:27:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:22.994 12:27:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:22.994 12:27:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:22.994 12:27:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:22.994 12:27:28 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.994 12:27:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:22.994 12:27:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:22.994 12:27:28 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.994 12:27:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:22.994 12:27:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:22.994 12:27:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:22.994 12:27:28 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:22.994 12:27:28 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:22.994 12:27:28 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.994 12:27:28 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.994 12:27:28 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.994 12:27:28 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.994 12:27:28 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.994 12:27:28 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.994 12:27:28 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:22.994 12:27:28 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:22.994 12:27:28 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.564 request: 00:07:23.564 { 00:07:23.564 "method": "env_dpdk_get_mem_stats", 00:07:23.564 "req_id": 1 00:07:23.564 } 00:07:23.564 Got JSON-RPC error response 00:07:23.564 response: 00:07:23.564 { 00:07:23.564 "code": -32601, 00:07:23.564 "message": "Method not found" 00:07:23.564 } 00:07:23.564 12:27:28 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:23.564 12:27:28 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:23.564 12:27:28 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:23.564 12:27:28 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:23.564 12:27:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72417 00:07:23.564 12:27:28 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 72417 ']' 00:07:23.564 12:27:28 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 72417 00:07:23.564 12:27:28 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:23.564 12:27:28 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.564 12:27:28 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72417 00:07:23.564 12:27:28 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.564 12:27:28 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.564 killing process with pid 72417 00:07:23.564 12:27:28 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72417' 00:07:23.564 12:27:28 app_cmdline -- common/autotest_common.sh@969 -- # kill 72417 00:07:23.564 12:27:28 app_cmdline -- common/autotest_common.sh@974 -- # wait 72417 00:07:23.564 00:07:23.564 real 0m1.378s 00:07:23.564 user 0m1.812s 00:07:23.564 sys 0m0.334s 00:07:23.564 12:27:28 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.564 12:27:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:23.564 ************************************ 00:07:23.564 END TEST app_cmdline 00:07:23.564 ************************************ 00:07:23.824 12:27:28 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:23.824 12:27:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.824 12:27:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.824 12:27:28 -- common/autotest_common.sh@10 -- # set +x 00:07:23.824 ************************************ 00:07:23.824 START TEST version 00:07:23.824 ************************************ 00:07:23.824 12:27:28 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:23.824 * Looking for test storage... 
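The app_cmdline test above exercises the target purely over JSON-RPC: it fetches the version, checks that rpc_get_methods and spdk_get_version are both registered, and then confirms that an unregistered method fails cleanly. A minimal sketch of the same sequence run by hand against an already-started spdk_tgt (assuming the default RPC socket, /var/tmp/spdk.sock):
  # Sketch only; mirrors what test/app/cmdline.sh drives through rpc.py above.
  ./scripts/rpc.py spdk_get_version                       # JSON with major/minor/patch/suffix
  ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # must contain the two expected methods
  # env_dpdk_get_mem_stats is not registered in this app, so the call is expected to
  # fail with JSON-RPC error -32601 ("Method not found"), exactly as logged above:
  ./scripts/rpc.py env_dpdk_get_mem_stats || echo 'expected: Method not found'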
00:07:23.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:23.824 12:27:28 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:23.824 12:27:28 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:23.824 12:27:28 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:23.824 12:27:28 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:23.824 12:27:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.824 12:27:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.824 12:27:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.824 12:27:28 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.824 12:27:28 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.824 12:27:28 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.824 12:27:28 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.824 12:27:28 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.824 12:27:28 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.824 12:27:28 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.824 12:27:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.824 12:27:28 version -- scripts/common.sh@344 -- # case "$op" in 00:07:23.824 12:27:28 version -- scripts/common.sh@345 -- # : 1 00:07:23.824 12:27:28 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.824 12:27:28 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.824 12:27:28 version -- scripts/common.sh@365 -- # decimal 1 00:07:23.824 12:27:28 version -- scripts/common.sh@353 -- # local d=1 00:07:23.824 12:27:28 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.824 12:27:28 version -- scripts/common.sh@355 -- # echo 1 00:07:23.824 12:27:28 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.824 12:27:28 version -- scripts/common.sh@366 -- # decimal 2 00:07:23.824 12:27:28 version -- scripts/common.sh@353 -- # local d=2 00:07:23.824 12:27:28 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.824 12:27:28 version -- scripts/common.sh@355 -- # echo 2 00:07:23.824 12:27:28 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.824 12:27:28 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.824 12:27:28 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.824 12:27:28 version -- scripts/common.sh@368 -- # return 0 00:07:23.824 12:27:28 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.824 12:27:28 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:23.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.824 --rc genhtml_branch_coverage=1 00:07:23.824 --rc genhtml_function_coverage=1 00:07:23.824 --rc genhtml_legend=1 00:07:23.824 --rc geninfo_all_blocks=1 00:07:23.824 --rc geninfo_unexecuted_blocks=1 00:07:23.824 00:07:23.824 ' 00:07:23.824 12:27:28 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:23.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.824 --rc genhtml_branch_coverage=1 00:07:23.824 --rc genhtml_function_coverage=1 00:07:23.824 --rc genhtml_legend=1 00:07:23.824 --rc geninfo_all_blocks=1 00:07:23.824 --rc geninfo_unexecuted_blocks=1 00:07:23.824 00:07:23.824 ' 00:07:23.824 12:27:28 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:23.824 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:23.824 --rc genhtml_branch_coverage=1 00:07:23.824 --rc genhtml_function_coverage=1 00:07:23.824 --rc genhtml_legend=1 00:07:23.824 --rc geninfo_all_blocks=1 00:07:23.824 --rc geninfo_unexecuted_blocks=1 00:07:23.824 00:07:23.824 ' 00:07:23.824 12:27:28 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:23.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.824 --rc genhtml_branch_coverage=1 00:07:23.824 --rc genhtml_function_coverage=1 00:07:23.824 --rc genhtml_legend=1 00:07:23.824 --rc geninfo_all_blocks=1 00:07:23.824 --rc geninfo_unexecuted_blocks=1 00:07:23.824 00:07:23.824 ' 00:07:23.824 12:27:28 version -- app/version.sh@17 -- # get_header_version major 00:07:23.824 12:27:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:23.824 12:27:28 version -- app/version.sh@14 -- # cut -f2 00:07:23.824 12:27:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:23.824 12:27:28 version -- app/version.sh@17 -- # major=24 00:07:23.824 12:27:28 version -- app/version.sh@18 -- # get_header_version minor 00:07:23.824 12:27:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:23.824 12:27:28 version -- app/version.sh@14 -- # cut -f2 00:07:23.824 12:27:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:23.824 12:27:28 version -- app/version.sh@18 -- # minor=9 00:07:23.824 12:27:28 version -- app/version.sh@19 -- # get_header_version patch 00:07:23.824 12:27:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:23.824 12:27:28 version -- app/version.sh@14 -- # cut -f2 00:07:23.824 12:27:28 version -- app/version.sh@14 -- # tr -d '"' 00:07:23.824 12:27:29 version -- app/version.sh@19 -- # patch=1 00:07:23.824 12:27:29 version -- app/version.sh@20 -- # get_header_version suffix 00:07:23.824 12:27:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:23.824 12:27:29 version -- app/version.sh@14 -- # cut -f2 00:07:23.824 12:27:29 version -- app/version.sh@14 -- # tr -d '"' 00:07:23.824 12:27:29 version -- app/version.sh@20 -- # suffix=-pre 00:07:23.824 12:27:29 version -- app/version.sh@22 -- # version=24.9 00:07:23.824 12:27:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:23.824 12:27:29 version -- app/version.sh@25 -- # version=24.9.1 00:07:23.824 12:27:29 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:23.824 12:27:29 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:23.824 12:27:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:23.824 12:27:29 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:23.824 12:27:29 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:23.824 00:07:23.824 real 0m0.223s 00:07:23.824 user 0m0.146s 00:07:23.824 sys 0m0.110s 00:07:23.824 12:27:29 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.824 12:27:29 version -- common/autotest_common.sh@10 -- # set +x 00:07:23.824 ************************************ 00:07:23.824 END TEST version 
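For reference, the get_header_version calls above pull each field straight out of include/spdk/version.h; a hand-run sketch of the same extraction (assuming the repo path used in this job):
  # Sketch of app/version.sh's field extraction, as traced above.
  V=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$V" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$V" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+'  "$V" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$V" | cut -f2 | tr -d '"')
  echo "${major}.${minor}.${patch}${suffix}"   # 24.9.1-pre for the build under test
version.sh then rewrites that as 24.9.1rc0 before comparing it with spdk.__version__, which is the equality checked at the end of the test above.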
00:07:23.824 ************************************ 00:07:24.084 12:27:29 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:24.084 12:27:29 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:24.084 12:27:29 -- spdk/autotest.sh@194 -- # uname -s 00:07:24.084 12:27:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:24.084 12:27:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:24.084 12:27:29 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:24.084 12:27:29 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:24.084 12:27:29 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:24.085 12:27:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.085 12:27:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.085 12:27:29 -- common/autotest_common.sh@10 -- # set +x 00:07:24.085 ************************************ 00:07:24.085 START TEST spdk_dd 00:07:24.085 ************************************ 00:07:24.085 12:27:29 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:24.085 * Looking for test storage... 00:07:24.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:24.085 12:27:29 spdk_dd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:24.085 12:27:29 spdk_dd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:24.085 12:27:29 spdk_dd -- common/autotest_common.sh@1681 -- # lcov --version 00:07:24.085 12:27:29 spdk_dd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:24.085 12:27:29 spdk_dd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.085 12:27:29 spdk_dd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:24.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.085 --rc genhtml_branch_coverage=1 00:07:24.085 --rc genhtml_function_coverage=1 00:07:24.085 --rc genhtml_legend=1 00:07:24.085 --rc geninfo_all_blocks=1 00:07:24.085 --rc geninfo_unexecuted_blocks=1 00:07:24.085 00:07:24.085 ' 00:07:24.085 12:27:29 spdk_dd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:24.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.085 --rc genhtml_branch_coverage=1 00:07:24.085 --rc genhtml_function_coverage=1 00:07:24.085 --rc genhtml_legend=1 00:07:24.085 --rc geninfo_all_blocks=1 00:07:24.085 --rc geninfo_unexecuted_blocks=1 00:07:24.085 00:07:24.085 ' 00:07:24.085 12:27:29 spdk_dd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:24.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.085 --rc genhtml_branch_coverage=1 00:07:24.085 --rc genhtml_function_coverage=1 00:07:24.085 --rc genhtml_legend=1 00:07:24.085 --rc geninfo_all_blocks=1 00:07:24.085 --rc geninfo_unexecuted_blocks=1 00:07:24.085 00:07:24.085 ' 00:07:24.085 12:27:29 spdk_dd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:24.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.085 --rc genhtml_branch_coverage=1 00:07:24.085 --rc genhtml_function_coverage=1 00:07:24.085 --rc genhtml_legend=1 00:07:24.085 --rc geninfo_all_blocks=1 00:07:24.085 --rc geninfo_unexecuted_blocks=1 00:07:24.085 00:07:24.085 ' 00:07:24.085 12:27:29 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.085 12:27:29 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.085 12:27:29 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.085 12:27:29 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.085 12:27:29 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.085 12:27:29 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:24.085 12:27:29 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.085 12:27:29 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:24.344 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:24.604 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:24.604 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:24.604 12:27:29 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:24.604 12:27:29 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:24.604 12:27:29 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:24.604 12:27:29 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:24.605 12:27:29 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:24.605 12:27:29 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
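Everything dd.sh does next runs against the two controllers that nvme_in_userspace just enumerated. A condensed sketch of that class-code walk (class 01, subclass 08, prog-if 02 = NVMe), as traced above:
  # Sketch only; same lspci pipeline the trace shows.
  lspci -mm -n -D | grep -i -- -p02 \
    | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # prints 0000:00:10.0 and 0000:00:11.0 on this VM; nvme_in_userspace then keeps
  # the controllers it can drive from userspace (here both, since setup.sh left
  # them bound to uio_pci_generic).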
00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fuse_dispatcher.so.1.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.605 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- 
dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- 
dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:24.606 * spdk_dd linked to liburing 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:24.606 12:27:29 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 
00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:07:24.606 12:27:29 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:07:24.607 12:27:29 
spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:07:24.607 12:27:29 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:07:24.607 12:27:29 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:24.607 12:27:29 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:24.607 12:27:29 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:24.607 12:27:29 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:24.607 12:27:29 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:24.607 12:27:29 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:24.607 12:27:29 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:24.607 12:27:29 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.607 12:27:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:24.607 ************************************ 00:07:24.607 START TEST spdk_dd_basic_rw 00:07:24.607 ************************************ 00:07:24.607 12:27:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:24.867 * Looking for test storage... 
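Before launching basic_rw, dd/common.sh decided above that spdk_dd really links liburing by reading the binary's DT_NEEDED entries. A stand-alone sketch of that check (assuming the build output path used in this job):
  # Sketch of check_liburing: scan the DT_NEEDED entries of the freshly built binary.
  BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  liburing_in_use=0
  while read -r _ lib _; do
    # any NEEDED entry of the form liburing.so.* means the binary links liburing
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
  done < <(objdump -p "$BIN" | grep NEEDED)
  (( liburing_in_use )) && printf '* spdk_dd linked to liburing\n'
  # liburing.so.2 shows up in the list above, so liburing_in_use is exported as 1,
  # matching CONFIG_URING=y in build_config.sh.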
00:07:24.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lcov --version 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:24.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.867 --rc genhtml_branch_coverage=1 00:07:24.867 --rc genhtml_function_coverage=1 00:07:24.867 --rc genhtml_legend=1 00:07:24.867 --rc geninfo_all_blocks=1 00:07:24.867 --rc geninfo_unexecuted_blocks=1 00:07:24.867 00:07:24.867 ' 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:24.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.867 --rc genhtml_branch_coverage=1 00:07:24.867 --rc genhtml_function_coverage=1 00:07:24.867 --rc genhtml_legend=1 00:07:24.867 --rc geninfo_all_blocks=1 00:07:24.867 --rc geninfo_unexecuted_blocks=1 00:07:24.867 00:07:24.867 ' 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:24.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.867 --rc genhtml_branch_coverage=1 00:07:24.867 --rc genhtml_function_coverage=1 00:07:24.867 --rc genhtml_legend=1 00:07:24.867 --rc geninfo_all_blocks=1 00:07:24.867 --rc geninfo_unexecuted_blocks=1 00:07:24.867 00:07:24.867 ' 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:24.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.867 --rc genhtml_branch_coverage=1 00:07:24.867 --rc genhtml_function_coverage=1 00:07:24.867 --rc genhtml_legend=1 00:07:24.867 --rc geninfo_all_blocks=1 00:07:24.867 --rc geninfo_unexecuted_blocks=1 00:07:24.867 00:07:24.867 ' 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:24.867 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:24.868 12:27:29 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:24.868 12:27:29 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:25.129 12:27:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:25.129 12:27:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:25.130 ************************************ 00:07:25.130 START TEST dd_bs_lt_native_bs 00:07:25.130 ************************************ 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.130 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:25.130 { 00:07:25.130 "subsystems": [ 00:07:25.130 { 00:07:25.130 "subsystem": "bdev", 00:07:25.130 "config": [ 00:07:25.130 { 00:07:25.130 "params": { 00:07:25.130 "trtype": "pcie", 00:07:25.130 "traddr": "0000:00:10.0", 00:07:25.130 "name": "Nvme0" 00:07:25.130 }, 00:07:25.130 "method": "bdev_nvme_attach_controller" 00:07:25.130 }, 00:07:25.130 { 00:07:25.130 "method": "bdev_wait_for_examine" 00:07:25.130 } 00:07:25.130 ] 00:07:25.130 } 00:07:25.130 ] 00:07:25.130 } 00:07:25.130 [2024-11-19 12:27:30.221272] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:25.130 [2024-11-19 12:27:30.221373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72755 ] 00:07:25.130 [2024-11-19 12:27:30.358955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.392 [2024-11-19 12:27:30.399329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.392 [2024-11-19 12:27:30.432143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.392 [2024-11-19 12:27:30.523539] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:25.392 [2024-11-19 12:27:30.523612] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.392 [2024-11-19 12:27:30.586496] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:25.392 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:07:25.392 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.392 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:07:25.651 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.652 00:07:25.652 real 0m0.488s 00:07:25.652 user 0m0.326s 00:07:25.652 sys 0m0.118s 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.652 
************************************ 00:07:25.652 END TEST dd_bs_lt_native_bs 00:07:25.652 ************************************ 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:25.652 ************************************ 00:07:25.652 START TEST dd_rw 00:07:25.652 ************************************ 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:25.652 12:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:26.220 12:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:26.220 12:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:26.220 12:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:26.220 12:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:26.220 [2024-11-19 12:27:31.379712] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:26.220 [2024-11-19 12:27:31.379814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72786 ] 00:07:26.220 { 00:07:26.220 "subsystems": [ 00:07:26.220 { 00:07:26.220 "subsystem": "bdev", 00:07:26.220 "config": [ 00:07:26.220 { 00:07:26.220 "params": { 00:07:26.220 "trtype": "pcie", 00:07:26.220 "traddr": "0000:00:10.0", 00:07:26.220 "name": "Nvme0" 00:07:26.220 }, 00:07:26.220 "method": "bdev_nvme_attach_controller" 00:07:26.220 }, 00:07:26.220 { 00:07:26.220 "method": "bdev_wait_for_examine" 00:07:26.220 } 00:07:26.220 ] 00:07:26.220 } 00:07:26.220 ] 00:07:26.220 } 00:07:26.479 [2024-11-19 12:27:31.511590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.479 [2024-11-19 12:27:31.542933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.479 [2024-11-19 12:27:31.569549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.479  [2024-11-19T12:27:31.999Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:26.739 00:07:26.739 12:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:26.739 12:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:26.739 12:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:26.739 12:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:26.739 [2024-11-19 12:27:31.830987] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:26.739 [2024-11-19 12:27:31.831071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72800 ] 00:07:26.739 { 00:07:26.739 "subsystems": [ 00:07:26.739 { 00:07:26.739 "subsystem": "bdev", 00:07:26.739 "config": [ 00:07:26.739 { 00:07:26.739 "params": { 00:07:26.739 "trtype": "pcie", 00:07:26.739 "traddr": "0000:00:10.0", 00:07:26.739 "name": "Nvme0" 00:07:26.739 }, 00:07:26.739 "method": "bdev_nvme_attach_controller" 00:07:26.739 }, 00:07:26.739 { 00:07:26.739 "method": "bdev_wait_for_examine" 00:07:26.739 } 00:07:26.739 ] 00:07:26.739 } 00:07:26.739 ] 00:07:26.739 } 00:07:26.739 [2024-11-19 12:27:31.960381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.739 [2024-11-19 12:27:31.992535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.022 [2024-11-19 12:27:32.023661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.022  [2024-11-19T12:27:32.282Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:27.022 00:07:27.022 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:27.022 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:27.022 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:27.022 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:27.022 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:27.022 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:27.022 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:27.022 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:27.022 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:27.022 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:27.022 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:27.306 { 00:07:27.306 "subsystems": [ 00:07:27.306 { 00:07:27.306 "subsystem": "bdev", 00:07:27.306 "config": [ 00:07:27.306 { 00:07:27.306 "params": { 00:07:27.306 "trtype": "pcie", 00:07:27.306 "traddr": "0000:00:10.0", 00:07:27.306 "name": "Nvme0" 00:07:27.306 }, 00:07:27.306 "method": "bdev_nvme_attach_controller" 00:07:27.306 }, 00:07:27.306 { 00:07:27.306 "method": "bdev_wait_for_examine" 00:07:27.306 } 00:07:27.306 ] 00:07:27.306 } 00:07:27.306 ] 00:07:27.306 } 00:07:27.306 [2024-11-19 12:27:32.313168] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:27.306 [2024-11-19 12:27:32.313434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72815 ] 00:07:27.306 [2024-11-19 12:27:32.450668] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.306 [2024-11-19 12:27:32.482017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.306 [2024-11-19 12:27:32.510981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.573  [2024-11-19T12:27:32.833Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:27.573 00:07:27.573 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:27.573 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:27.573 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:27.573 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:27.573 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:27.573 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:27.573 12:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:28.140 12:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:28.140 12:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:28.140 12:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:28.140 12:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:28.140 [2024-11-19 12:27:33.347437] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:28.140 [2024-11-19 12:27:33.347739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72834 ] 00:07:28.140 { 00:07:28.140 "subsystems": [ 00:07:28.140 { 00:07:28.140 "subsystem": "bdev", 00:07:28.140 "config": [ 00:07:28.140 { 00:07:28.140 "params": { 00:07:28.140 "trtype": "pcie", 00:07:28.140 "traddr": "0000:00:10.0", 00:07:28.140 "name": "Nvme0" 00:07:28.140 }, 00:07:28.140 "method": "bdev_nvme_attach_controller" 00:07:28.140 }, 00:07:28.140 { 00:07:28.140 "method": "bdev_wait_for_examine" 00:07:28.140 } 00:07:28.140 ] 00:07:28.140 } 00:07:28.140 ] 00:07:28.140 } 00:07:28.400 [2024-11-19 12:27:33.478704] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.400 [2024-11-19 12:27:33.510074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.400 [2024-11-19 12:27:33.539340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.400  [2024-11-19T12:27:33.920Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:28.660 00:07:28.660 12:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:28.660 12:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:28.660 12:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:28.660 12:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:28.660 { 00:07:28.660 "subsystems": [ 00:07:28.660 { 00:07:28.660 "subsystem": "bdev", 00:07:28.660 "config": [ 00:07:28.660 { 00:07:28.660 "params": { 00:07:28.660 "trtype": "pcie", 00:07:28.660 "traddr": "0000:00:10.0", 00:07:28.660 "name": "Nvme0" 00:07:28.660 }, 00:07:28.660 "method": "bdev_nvme_attach_controller" 00:07:28.660 }, 00:07:28.660 { 00:07:28.660 "method": "bdev_wait_for_examine" 00:07:28.660 } 00:07:28.660 ] 00:07:28.660 } 00:07:28.660 ] 00:07:28.660 } 00:07:28.660 [2024-11-19 12:27:33.813634] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:28.660 [2024-11-19 12:27:33.813738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72848 ] 00:07:28.919 [2024-11-19 12:27:33.952446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.919 [2024-11-19 12:27:33.983135] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.919 [2024-11-19 12:27:34.009574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.919  [2024-11-19T12:27:34.439Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:29.179 00:07:29.179 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:29.179 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:29.179 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:29.179 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:29.179 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:29.179 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:29.179 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:29.179 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:29.179 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:29.179 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:29.179 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.179 { 00:07:29.179 "subsystems": [ 00:07:29.179 { 00:07:29.179 "subsystem": "bdev", 00:07:29.179 "config": [ 00:07:29.179 { 00:07:29.179 "params": { 00:07:29.179 "trtype": "pcie", 00:07:29.179 "traddr": "0000:00:10.0", 00:07:29.179 "name": "Nvme0" 00:07:29.179 }, 00:07:29.179 "method": "bdev_nvme_attach_controller" 00:07:29.179 }, 00:07:29.179 { 00:07:29.179 "method": "bdev_wait_for_examine" 00:07:29.179 } 00:07:29.179 ] 00:07:29.179 } 00:07:29.179 ] 00:07:29.179 } 00:07:29.179 [2024-11-19 12:27:34.283363] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:29.179 [2024-11-19 12:27:34.283478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72862 ] 00:07:29.179 [2024-11-19 12:27:34.421003] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.439 [2024-11-19 12:27:34.453002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.439 [2024-11-19 12:27:34.479519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.439  [2024-11-19T12:27:34.958Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:29.698 00:07:29.698 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:29.698 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:29.698 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:29.698 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:29.698 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:29.698 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:29.698 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:29.698 12:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.267 12:27:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:30.267 12:27:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:30.267 12:27:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:30.267 12:27:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.267 [2024-11-19 12:27:35.262448] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:30.267 [2024-11-19 12:27:35.262687] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72877 ] 00:07:30.267 { 00:07:30.267 "subsystems": [ 00:07:30.267 { 00:07:30.267 "subsystem": "bdev", 00:07:30.267 "config": [ 00:07:30.267 { 00:07:30.267 "params": { 00:07:30.267 "trtype": "pcie", 00:07:30.267 "traddr": "0000:00:10.0", 00:07:30.267 "name": "Nvme0" 00:07:30.267 }, 00:07:30.267 "method": "bdev_nvme_attach_controller" 00:07:30.267 }, 00:07:30.267 { 00:07:30.267 "method": "bdev_wait_for_examine" 00:07:30.267 } 00:07:30.267 ] 00:07:30.267 } 00:07:30.267 ] 00:07:30.267 } 00:07:30.267 [2024-11-19 12:27:35.396156] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.267 [2024-11-19 12:27:35.429451] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.267 [2024-11-19 12:27:35.456839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.527  [2024-11-19T12:27:35.787Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:30.527 00:07:30.527 12:27:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:30.527 12:27:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:30.527 12:27:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:30.527 12:27:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.527 [2024-11-19 12:27:35.731321] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:30.527 [2024-11-19 12:27:35.731429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72896 ] 00:07:30.527 { 00:07:30.527 "subsystems": [ 00:07:30.527 { 00:07:30.527 "subsystem": "bdev", 00:07:30.527 "config": [ 00:07:30.527 { 00:07:30.527 "params": { 00:07:30.527 "trtype": "pcie", 00:07:30.527 "traddr": "0000:00:10.0", 00:07:30.527 "name": "Nvme0" 00:07:30.527 }, 00:07:30.527 "method": "bdev_nvme_attach_controller" 00:07:30.527 }, 00:07:30.527 { 00:07:30.527 "method": "bdev_wait_for_examine" 00:07:30.527 } 00:07:30.527 ] 00:07:30.527 } 00:07:30.527 ] 00:07:30.527 } 00:07:30.787 [2024-11-19 12:27:35.871014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.787 [2024-11-19 12:27:35.902627] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.787 [2024-11-19 12:27:35.929134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.787  [2024-11-19T12:27:36.306Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:31.046 00:07:31.046 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.046 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:31.046 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:31.046 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:31.046 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:31.046 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:31.046 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:31.046 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:31.046 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:31.046 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:31.046 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:31.046 [2024-11-19 12:27:36.206075] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:31.046 [2024-11-19 12:27:36.206169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72906 ] 00:07:31.046 { 00:07:31.046 "subsystems": [ 00:07:31.046 { 00:07:31.046 "subsystem": "bdev", 00:07:31.046 "config": [ 00:07:31.046 { 00:07:31.046 "params": { 00:07:31.046 "trtype": "pcie", 00:07:31.046 "traddr": "0000:00:10.0", 00:07:31.046 "name": "Nvme0" 00:07:31.046 }, 00:07:31.046 "method": "bdev_nvme_attach_controller" 00:07:31.046 }, 00:07:31.046 { 00:07:31.046 "method": "bdev_wait_for_examine" 00:07:31.046 } 00:07:31.046 ] 00:07:31.046 } 00:07:31.046 ] 00:07:31.046 } 00:07:31.306 [2024-11-19 12:27:36.341365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.306 [2024-11-19 12:27:36.376943] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.306 [2024-11-19 12:27:36.405146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.306  [2024-11-19T12:27:36.826Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:31.566 00:07:31.566 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:31.566 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:31.566 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:31.566 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:31.566 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:31.566 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:31.566 12:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.134 12:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:32.134 12:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:32.134 12:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:32.134 12:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.134 [2024-11-19 12:27:37.153014] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:32.134 [2024-11-19 12:27:37.153115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72925 ] 00:07:32.134 { 00:07:32.134 "subsystems": [ 00:07:32.134 { 00:07:32.134 "subsystem": "bdev", 00:07:32.134 "config": [ 00:07:32.134 { 00:07:32.134 "params": { 00:07:32.134 "trtype": "pcie", 00:07:32.134 "traddr": "0000:00:10.0", 00:07:32.134 "name": "Nvme0" 00:07:32.134 }, 00:07:32.134 "method": "bdev_nvme_attach_controller" 00:07:32.134 }, 00:07:32.134 { 00:07:32.134 "method": "bdev_wait_for_examine" 00:07:32.134 } 00:07:32.134 ] 00:07:32.134 } 00:07:32.134 ] 00:07:32.134 } 00:07:32.134 [2024-11-19 12:27:37.280079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.134 [2024-11-19 12:27:37.312753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.134 [2024-11-19 12:27:37.339431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.393  [2024-11-19T12:27:37.653Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:32.393 00:07:32.393 12:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:32.393 12:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:32.393 12:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:32.393 12:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.393 [2024-11-19 12:27:37.604015] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:32.393 [2024-11-19 12:27:37.604117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72938 ] 00:07:32.393 { 00:07:32.393 "subsystems": [ 00:07:32.393 { 00:07:32.393 "subsystem": "bdev", 00:07:32.393 "config": [ 00:07:32.393 { 00:07:32.393 "params": { 00:07:32.393 "trtype": "pcie", 00:07:32.393 "traddr": "0000:00:10.0", 00:07:32.393 "name": "Nvme0" 00:07:32.393 }, 00:07:32.393 "method": "bdev_nvme_attach_controller" 00:07:32.393 }, 00:07:32.393 { 00:07:32.393 "method": "bdev_wait_for_examine" 00:07:32.393 } 00:07:32.393 ] 00:07:32.393 } 00:07:32.393 ] 00:07:32.393 } 00:07:32.652 [2024-11-19 12:27:37.738561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.652 [2024-11-19 12:27:37.769588] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.652 [2024-11-19 12:27:37.796095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.652  [2024-11-19T12:27:38.172Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:32.912 00:07:32.912 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:32.912 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:32.912 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:32.912 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:32.912 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:32.912 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:32.912 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:32.912 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:32.912 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:32.912 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:32.912 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.912 { 00:07:32.912 "subsystems": [ 00:07:32.912 { 00:07:32.912 "subsystem": "bdev", 00:07:32.912 "config": [ 00:07:32.912 { 00:07:32.912 "params": { 00:07:32.912 "trtype": "pcie", 00:07:32.912 "traddr": "0000:00:10.0", 00:07:32.912 "name": "Nvme0" 00:07:32.912 }, 00:07:32.912 "method": "bdev_nvme_attach_controller" 00:07:32.912 }, 00:07:32.912 { 00:07:32.912 "method": "bdev_wait_for_examine" 00:07:32.912 } 00:07:32.912 ] 00:07:32.912 } 00:07:32.912 ] 00:07:32.912 } 00:07:32.912 [2024-11-19 12:27:38.074222] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:32.912 [2024-11-19 12:27:38.074324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72954 ] 00:07:33.172 [2024-11-19 12:27:38.214188] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.172 [2024-11-19 12:27:38.246920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.172 [2024-11-19 12:27:38.275449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.172  [2024-11-19T12:27:38.691Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:33.431 00:07:33.431 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:33.431 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:33.431 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:33.431 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:33.431 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:33.431 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:33.431 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:33.431 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.000 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:34.000 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:34.000 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:34.000 12:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.000 { 00:07:34.000 "subsystems": [ 00:07:34.000 { 00:07:34.000 "subsystem": "bdev", 00:07:34.000 "config": [ 00:07:34.000 { 00:07:34.000 "params": { 00:07:34.000 "trtype": "pcie", 00:07:34.000 "traddr": "0000:00:10.0", 00:07:34.000 "name": "Nvme0" 00:07:34.000 }, 00:07:34.000 "method": "bdev_nvme_attach_controller" 00:07:34.000 }, 00:07:34.000 { 00:07:34.000 "method": "bdev_wait_for_examine" 00:07:34.000 } 00:07:34.000 ] 00:07:34.000 } 00:07:34.000 ] 00:07:34.000 } 00:07:34.000 [2024-11-19 12:27:39.024098] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:34.000 [2024-11-19 12:27:39.024202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72973 ] 00:07:34.000 [2024-11-19 12:27:39.163168] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.000 [2024-11-19 12:27:39.193994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.000 [2024-11-19 12:27:39.221061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.258  [2024-11-19T12:27:39.518Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:34.258 00:07:34.258 12:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:34.258 12:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:34.258 12:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:34.258 12:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.258 [2024-11-19 12:27:39.493363] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:34.259 [2024-11-19 12:27:39.493465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72981 ] 00:07:34.259 { 00:07:34.259 "subsystems": [ 00:07:34.259 { 00:07:34.259 "subsystem": "bdev", 00:07:34.259 "config": [ 00:07:34.259 { 00:07:34.259 "params": { 00:07:34.259 "trtype": "pcie", 00:07:34.259 "traddr": "0000:00:10.0", 00:07:34.259 "name": "Nvme0" 00:07:34.259 }, 00:07:34.259 "method": "bdev_nvme_attach_controller" 00:07:34.259 }, 00:07:34.259 { 00:07:34.259 "method": "bdev_wait_for_examine" 00:07:34.259 } 00:07:34.259 ] 00:07:34.259 } 00:07:34.259 ] 00:07:34.259 } 00:07:34.518 [2024-11-19 12:27:39.631170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.518 [2024-11-19 12:27:39.663221] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.518 [2024-11-19 12:27:39.692408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.777  [2024-11-19T12:27:40.037Z] Copying: 48/48 [kB] (average 23 MBps) 00:07:34.777 00:07:34.777 12:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:34.777 12:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:34.777 12:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:34.777 12:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:34.777 12:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:34.777 12:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:34.777 12:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:34.777 12:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:07:34.777 12:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:34.777 12:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:34.777 12:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.777 [2024-11-19 12:27:39.971313] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:34.777 [2024-11-19 12:27:39.971426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73002 ] 00:07:34.777 { 00:07:34.777 "subsystems": [ 00:07:34.777 { 00:07:34.777 "subsystem": "bdev", 00:07:34.777 "config": [ 00:07:34.777 { 00:07:34.777 "params": { 00:07:34.777 "trtype": "pcie", 00:07:34.777 "traddr": "0000:00:10.0", 00:07:34.777 "name": "Nvme0" 00:07:34.777 }, 00:07:34.777 "method": "bdev_nvme_attach_controller" 00:07:34.777 }, 00:07:34.777 { 00:07:34.777 "method": "bdev_wait_for_examine" 00:07:34.777 } 00:07:34.777 ] 00:07:34.777 } 00:07:34.777 ] 00:07:34.777 } 00:07:35.036 [2024-11-19 12:27:40.110599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.036 [2024-11-19 12:27:40.141190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.036 [2024-11-19 12:27:40.167785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.036  [2024-11-19T12:27:40.555Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:35.296 00:07:35.296 12:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:35.296 12:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:35.296 12:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:35.296 12:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:35.296 12:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:35.296 12:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:35.296 12:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:35.864 12:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:35.864 12:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:35.864 12:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:35.864 12:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:35.864 [2024-11-19 12:27:40.926474] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:35.864 [2024-11-19 12:27:40.927022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73021 ] 00:07:35.864 { 00:07:35.864 "subsystems": [ 00:07:35.864 { 00:07:35.864 "subsystem": "bdev", 00:07:35.864 "config": [ 00:07:35.864 { 00:07:35.864 "params": { 00:07:35.864 "trtype": "pcie", 00:07:35.864 "traddr": "0000:00:10.0", 00:07:35.864 "name": "Nvme0" 00:07:35.864 }, 00:07:35.864 "method": "bdev_nvme_attach_controller" 00:07:35.864 }, 00:07:35.864 { 00:07:35.864 "method": "bdev_wait_for_examine" 00:07:35.864 } 00:07:35.864 ] 00:07:35.864 } 00:07:35.864 ] 00:07:35.864 } 00:07:35.864 [2024-11-19 12:27:41.066860] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.864 [2024-11-19 12:27:41.099750] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.124 [2024-11-19 12:27:41.127102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.124  [2024-11-19T12:27:41.384Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:36.124 00:07:36.124 12:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:36.124 12:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:36.124 12:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.124 12:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.383 { 00:07:36.383 "subsystems": [ 00:07:36.383 { 00:07:36.383 "subsystem": "bdev", 00:07:36.383 "config": [ 00:07:36.383 { 00:07:36.383 "params": { 00:07:36.384 "trtype": "pcie", 00:07:36.384 "traddr": "0000:00:10.0", 00:07:36.384 "name": "Nvme0" 00:07:36.384 }, 00:07:36.384 "method": "bdev_nvme_attach_controller" 00:07:36.384 }, 00:07:36.384 { 00:07:36.384 "method": "bdev_wait_for_examine" 00:07:36.384 } 00:07:36.384 ] 00:07:36.384 } 00:07:36.384 ] 00:07:36.384 } 00:07:36.384 [2024-11-19 12:27:41.402437] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:36.384 [2024-11-19 12:27:41.402531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73029 ] 00:07:36.384 [2024-11-19 12:27:41.538713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.384 [2024-11-19 12:27:41.576778] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.384 [2024-11-19 12:27:41.605438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.643  [2024-11-19T12:27:41.903Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:36.643 00:07:36.643 12:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.643 12:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:36.643 12:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:36.643 12:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:36.643 12:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:36.643 12:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:36.643 12:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:36.643 12:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:36.643 12:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:36.643 12:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.643 12:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.643 { 00:07:36.643 "subsystems": [ 00:07:36.643 { 00:07:36.643 "subsystem": "bdev", 00:07:36.643 "config": [ 00:07:36.643 { 00:07:36.643 "params": { 00:07:36.643 "trtype": "pcie", 00:07:36.643 "traddr": "0000:00:10.0", 00:07:36.643 "name": "Nvme0" 00:07:36.643 }, 00:07:36.643 "method": "bdev_nvme_attach_controller" 00:07:36.643 }, 00:07:36.643 { 00:07:36.643 "method": "bdev_wait_for_examine" 00:07:36.643 } 00:07:36.643 ] 00:07:36.643 } 00:07:36.643 ] 00:07:36.643 } 00:07:36.643 [2024-11-19 12:27:41.889059] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:36.643 [2024-11-19 12:27:41.889145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73050 ] 00:07:36.902 [2024-11-19 12:27:42.024912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.903 [2024-11-19 12:27:42.055705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.903 [2024-11-19 12:27:42.081796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.162  [2024-11-19T12:27:42.422Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:37.162 00:07:37.162 00:07:37.162 real 0m11.593s 00:07:37.162 user 0m8.627s 00:07:37.162 sys 0m3.541s 00:07:37.162 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.162 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.162 ************************************ 00:07:37.162 END TEST dd_rw 00:07:37.162 ************************************ 00:07:37.162 12:27:42 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:37.162 12:27:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.162 12:27:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.162 12:27:42 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.162 ************************************ 00:07:37.162 START TEST dd_rw_offset 00:07:37.162 ************************************ 00:07:37.162 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:07:37.162 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:37.162 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:37.162 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:37.162 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:37.162 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:37.163 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=7l2eqv8cxcacqcsia1vnaybyruxy2tdbfeis1861w0jdyexslgy03deysokd79fixbpwa53fddci9hf1n0q7u2ch8k5da2hlmmusog92u4j9hgmcm3v3zu56lmgyl26h7wj0wcsuh9uflkv31jyhgfxgd96bniq5wth0f5nkw1sih1v98xd8lz9iplals7pj8qm4y5nl7b0k4lt0w522hxr6tb09uqvrb0j9pppek0mwu7qc0v7sn14inc0xlb171do3mfq9nqn3yhif4boje2jebtg85uoug3uwv4petdj33n5o1ncqlsuk50qqvstv19v8cq9te5r2ou796inig4g1xckiwuiuphog33gmazxtj43jnfxabigojkv8tbvzgkyyelgkb0trbu81milhjk53xnqazrekg0qd566xfoqtrx7u5kmkqxyl6l941ut9mapho2e469sbkxkil1uhzhnnp3j27ys5mahxrpfd2su4bsqod7l1n50alzwiroxfks0ttoqpkx6u1f1v09silzvintf20ezsw14yysfmhu709ucmhc60hzxmzl9nvhss55bhhie7msn3whvevti052xui5gzbm3nzf2iwnpbibn5r1d6ja2hqbv6kwnk7e3sozz1k2wt12wjv2kf10nuvaxes1d8eft43e4pgwle4xh5gvqeneko6y9kevadw7mw5msdfp7y64o187u424xtd1wt75ur7kkw05k6mfbjm926ts0whlf1rq6goi7d6qnn5wkbf9gidhwlmsglx79jjcnghjehidqz2m83q9607mhw1v015l7fe6lms01n2p95dt9niygbms9qjf2ollmnk8isx0976kvuegrdshjmwk68q4yzhpebucj75y6x5ab65cn3cdpqzn2jsgab82lhxitixkrxlk2o3fa5p9bkncl5ibt4vo0tubsw1unppwy0ublg4ursnpdxdc2c4y1lzmvdexcthc56b1mtxxehsoozm5hfetv8ucdum4rm462tavk7945wnbjnsf8oqc6xko77zeastyplcpjxeld8cl5yskevno18cmqjryxzspuqvyjxfxj8dksw56851c8huydh2ztpo5sce60ri9c0pkpefsf1evyl6vd1fmzeu70zt2zj4a4suihdgu83zw6aq3e17qwi26lg06ajcvbs2vvbmb388gdgraqn1cbuski7zvhzn3ij7opkv6vuvkblwp6jochd6ebqpwe6sngzzhgahopxktcoqxn3bqh25ji4loj3k71xgoqidybpn0qbwif9k7bk6544l3j8gvlrjmkk69fxucgsoi8rmxuuqemdthnbcfj0okon63evpj4lk60cbr1styr197tk34mbzwem506nqc7zbb8mlvjmvv3dhgf30td4eghx03n3v32qvcxmycdr47tsi5nmuc5uewx9r6oeldw4ge4pqrzn62d6adps0fc6xps5rrx89vt56kx6d1tayhr3ut56jiigewedi57ex2w4ptcarif1ofvy3azlhkq1vy6k62zw1sz80ddkkmppc51h09f6neblhca4hh8eg4zrlqc8zq8ivnj0euau1kdic38464fce8mu0i97cwkge03zzsy4eyzg6ecmxr4d8csoluvki7wm91zl0kk5p0x4veu9b6hsy6s67gbdudd1i5ovf9r0iqqd3dqvvk8ota2rx6w0itt82clu72cnhyc8psh4zhahfv95j0jn3ecgjq68ngj954gomtdxehnc403iemkhcvkqpx7ygk8od7wq9qpq4l4dzh7cdfm0vpfjpfv13c8b6eai1tf0uzu5tkafnpsqm5ys0y27rcfae93onewh27vwdo91cp3y5lbvm3e0g4mke3vxsmazzm08xuwkf44pktohrodj15zofc2jmlb8aquj1dj9dy13p4t5ao6k7yuwy6rxavytq42mz9251sq2c900k2zg0byku60r1wjqatx4ektzgprcajaud15foeagc0b865t63p1g6sh9lx6yp8oiwbr5ru40n2kepk1ei56mqbvphpt9a5csew1wtamiqs79sdly11ojibpf5qcp791a1g00hdmcuv4knzfvvh0bw1m8aukx8jvwvmzqfvgddhmqrf1g5m28abac0hogxkyjeij6p6jsaytaqy2r8zsjezk75ax1m8z45d0txwtwutugnneclapk0wvio3msghi897jdjds08sif8xc88yca3hzqendgt9cc33lr6lpelq7yf104u7zmnmflu7z7qic9wgs3mrl61b9wuhpfpuekrq5fj384pchgwfam42mdx5a7ljthbxsu0motn2qa83nuwrlg7lokfpqqa57k5xag9dwa8a2r2zt96hja8q3t5q1z1wrxvj8kzf0is6k67jq5gm419x62z32o1jub2u3piq6o45yw8y1tue0zwdsntoqnm1blpw8uzbn3ztx5eikal1c5gahwaldjfhdq9m0rptdk7xyg95nbzpbkr39d66etmrfrvo9s6vmro9y43mc8wx9r8h0quzqvdhlgn9nk8oa007qenuy26vvbw0wei05rkasa1qfdr6j3y8k3faxkz3tt2t87o6p2e7zfo133c5x8qut3tcbo4erh4s3lxe7ehtvvxkwdp2dbhvq12cvy8ey6moln9gbrriffk22fhwczy1yan6r9o51wv877soay7l4fvm0sw0i47htc44abhqpe9f1578gx5me4hlsfma2xlo9scg932akmvo2olg65skumcxeij2xhea0h2ffks9oyi6sq8b72cqvsbcn27c8slahjmdhkpfvlg5ay0vgdjh60lpcay9u5g7kuh7q9xpqws3nmojsb3bz80o66z5xndv2jpchgviqfyqze7fth0yknm0g2ox2ov79zfwmzxuyta4zzs5gw6jo0z8cgr4gc19ya7av6xpv7tlfz62796lb6bmmbj081fsf5r24we1bqay0nzt41wcp9eucb4wq4wj1fspikbf83ga8dkvjy4kukslqyamhd11c3hfeg6l8pwtzujddnav13hz618b0how0b41u6on4pquuvfntpj49i607yu5rysdw9draiwmiwusb7459a7lqdxbg9zakckyqy480g23v70l5rx6yeslemmdhti6lvd6ekvc5arpstkkrvsk01je85l2yfge03xgxpb574r7s8le0rwq00foijshgr5gdwjhzy1hdvcy7cagecvlclpe8gv3o2glut5w5r815yhuq1muxcrb318rwpbv8uonkcxlegm3ya9bgdpdysl11w5wj61wgg0345c757vyy9ks04y8efs22p9zddqcs6ofaegr2oscrgkjqin1jyp9o8sk9a9a2ye2vjpf13luy1750d51h7gotfx10n4tfbbuzjq4pilvn0xi2n9uqm9d6vydu78njqa45s6qmekxkwmm1bd8qomlf24y3zju945j5hldqu1e80uk8cb2c7nvvo73w40kxsh84naq78lqlki4
0mww8wmt3zas0moer8ovwhoiv22ap3jrwf13azb1p2iy36eguuo23w2cla62mmrr3741ikkdh5rm86cllng49kom1jyzn7i1m4s68vzzehmfp8liwhvf9c1klxfwio8ikc8616lr4j5agav700o7faglekvdsor0als0so01omiui82y3et3rofk5mwed5bso3g9cm4dwkohhanaeq0brjf3wpgozexypam0oj3yffjwas7mvpu0uzcc7b878z209x85bxifj3fq8j5mifw7lidrptjq6l1jnm4abc0257jvr8p0v13g0a4i3bp0ubi2a3o9q5hq6gywefxa0zo0vo8h6ezm6i2xxc11f0afs9aicnau3i4vr349tiibruk3waf95lfqvan5mj6hzcb89boakwob6rzz7mrua3ry4czfuweet9wijiwtgsl78gzv040cnygf9ajngfxqdklaex2vndxlmwu9ama652i1435p1k26tcimwm97b3nzcc96cnv509v5ynyfn8ns3e8mudq8smo2xpylo1 00:07:37.163 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:37.163 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:37.163 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:37.163 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:37.422 { 00:07:37.422 "subsystems": [ 00:07:37.422 { 00:07:37.422 "subsystem": "bdev", 00:07:37.422 "config": [ 00:07:37.422 { 00:07:37.422 "params": { 00:07:37.422 "trtype": "pcie", 00:07:37.422 "traddr": "0000:00:10.0", 00:07:37.422 "name": "Nvme0" 00:07:37.422 }, 00:07:37.422 "method": "bdev_nvme_attach_controller" 00:07:37.422 }, 00:07:37.422 { 00:07:37.422 "method": "bdev_wait_for_examine" 00:07:37.422 } 00:07:37.422 ] 00:07:37.422 } 00:07:37.422 ] 00:07:37.422 } 00:07:37.422 [2024-11-19 12:27:42.458002] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:37.422 [2024-11-19 12:27:42.458121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73075 ] 00:07:37.422 [2024-11-19 12:27:42.597491] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.422 [2024-11-19 12:27:42.635480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.422 [2024-11-19 12:27:42.662183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.682  [2024-11-19T12:27:42.942Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:37.682 00:07:37.682 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:37.682 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:37.682 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:37.682 12:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:37.682 { 00:07:37.682 "subsystems": [ 00:07:37.682 { 00:07:37.682 "subsystem": "bdev", 00:07:37.682 "config": [ 00:07:37.682 { 00:07:37.682 "params": { 00:07:37.682 "trtype": "pcie", 00:07:37.682 "traddr": "0000:00:10.0", 00:07:37.682 "name": "Nvme0" 00:07:37.682 }, 00:07:37.682 "method": "bdev_nvme_attach_controller" 00:07:37.682 }, 00:07:37.682 { 00:07:37.682 "method": "bdev_wait_for_examine" 00:07:37.682 } 00:07:37.682 ] 00:07:37.682 } 00:07:37.682 ] 00:07:37.682 } 00:07:37.682 [2024-11-19 12:27:42.932232] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:37.682 [2024-11-19 12:27:42.932332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73094 ] 00:07:37.942 [2024-11-19 12:27:43.070453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.942 [2024-11-19 12:27:43.102605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.942 [2024-11-19 12:27:43.129594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.202  [2024-11-19T12:27:43.462Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:38.202 00:07:38.202 12:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:38.203 12:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 7l2eqv8cxcacqcsia1vnaybyruxy2tdbfeis1861w0jdyexslgy03deysokd79fixbpwa53fddci9hf1n0q7u2ch8k5da2hlmmusog92u4j9hgmcm3v3zu56lmgyl26h7wj0wcsuh9uflkv31jyhgfxgd96bniq5wth0f5nkw1sih1v98xd8lz9iplals7pj8qm4y5nl7b0k4lt0w522hxr6tb09uqvrb0j9pppek0mwu7qc0v7sn14inc0xlb171do3mfq9nqn3yhif4boje2jebtg85uoug3uwv4petdj33n5o1ncqlsuk50qqvstv19v8cq9te5r2ou796inig4g1xckiwuiuphog33gmazxtj43jnfxabigojkv8tbvzgkyyelgkb0trbu81milhjk53xnqazrekg0qd566xfoqtrx7u5kmkqxyl6l941ut9mapho2e469sbkxkil1uhzhnnp3j27ys5mahxrpfd2su4bsqod7l1n50alzwiroxfks0ttoqpkx6u1f1v09silzvintf20ezsw14yysfmhu709ucmhc60hzxmzl9nvhss55bhhie7msn3whvevti052xui5gzbm3nzf2iwnpbibn5r1d6ja2hqbv6kwnk7e3sozz1k2wt12wjv2kf10nuvaxes1d8eft43e4pgwle4xh5gvqeneko6y9kevadw7mw5msdfp7y64o187u424xtd1wt75ur7kkw05k6mfbjm926ts0whlf1rq6goi7d6qnn5wkbf9gidhwlmsglx79jjcnghjehidqz2m83q9607mhw1v015l7fe6lms01n2p95dt9niygbms9qjf2ollmnk8isx0976kvuegrdshjmwk68q4yzhpebucj75y6x5ab65cn3cdpqzn2jsgab82lhxitixkrxlk2o3fa5p9bkncl5ibt4vo0tubsw1unppwy0ublg4ursnpdxdc2c4y1lzmvdexcthc56b1mtxxehsoozm5hfetv8ucdum4rm462tavk7945wnbjnsf8oqc6xko77zeastyplcpjxeld8cl5yskevno18cmqjryxzspuqvyjxfxj8dksw56851c8huydh2ztpo5sce60ri9c0pkpefsf1evyl6vd1fmzeu70zt2zj4a4suihdgu83zw6aq3e17qwi26lg06ajcvbs2vvbmb388gdgraqn1cbuski7zvhzn3ij7opkv6vuvkblwp6jochd6ebqpwe6sngzzhgahopxktcoqxn3bqh25ji4loj3k71xgoqidybpn0qbwif9k7bk6544l3j8gvlrjmkk69fxucgsoi8rmxuuqemdthnbcfj0okon63evpj4lk60cbr1styr197tk34mbzwem506nqc7zbb8mlvjmvv3dhgf30td4eghx03n3v32qvcxmycdr47tsi5nmuc5uewx9r6oeldw4ge4pqrzn62d6adps0fc6xps5rrx89vt56kx6d1tayhr3ut56jiigewedi57ex2w4ptcarif1ofvy3azlhkq1vy6k62zw1sz80ddkkmppc51h09f6neblhca4hh8eg4zrlqc8zq8ivnj0euau1kdic38464fce8mu0i97cwkge03zzsy4eyzg6ecmxr4d8csoluvki7wm91zl0kk5p0x4veu9b6hsy6s67gbdudd1i5ovf9r0iqqd3dqvvk8ota2rx6w0itt82clu72cnhyc8psh4zhahfv95j0jn3ecgjq68ngj954gomtdxehnc403iemkhcvkqpx7ygk8od7wq9qpq4l4dzh7cdfm0vpfjpfv13c8b6eai1tf0uzu5tkafnpsqm5ys0y27rcfae93onewh27vwdo91cp3y5lbvm3e0g4mke3vxsmazzm08xuwkf44pktohrodj15zofc2jmlb8aquj1dj9dy13p4t5ao6k7yuwy6rxavytq42mz9251sq2c900k2zg0byku60r1wjqatx4ektzgprcajaud15foeagc0b865t63p1g6sh9lx6yp8oiwbr5ru40n2kepk1ei56mqbvphpt9a5csew1wtamiqs79sdly11ojibpf5qcp791a1g00hdmcuv4knzfvvh0bw1m8aukx8jvwvmzqfvgddhmqrf1g5m28abac0hogxkyjeij6p6jsaytaqy2r8zsjezk75ax1m8z45d0txwtwutugnneclapk0wvio3msghi897jdjds08sif8xc88yca3hzqendgt9cc33lr6lpelq7yf104u7zmnmflu7z7qic9wgs3mrl61b9wuhpfpuekrq5fj384pchgwfam42mdx5a7ljthbxsu0motn2qa83nuwrlg7lokfpqqa57k5xag9dwa8a2r2zt96hja8q3t5q1z1wrxvj8kzf0is6k67jq5gm419x62z32o1jub2u3piq6o45yw8y1tue0zwdsntoqnm1blpw8uzbn3ztx5eikal1c5gahwaldjfhdq9m0rptdk7xyg95nbzpbkr39d66etmrfrvo9s6vmro9y43mc8wx9r8h0quzqvdhlgn9nk8oa007qenuy26vvbw0wei05rkasa1qfd
r6j3y8k3faxkz3tt2t87o6p2e7zfo133c5x8qut3tcbo4erh4s3lxe7ehtvvxkwdp2dbhvq12cvy8ey6moln9gbrriffk22fhwczy1yan6r9o51wv877soay7l4fvm0sw0i47htc44abhqpe9f1578gx5me4hlsfma2xlo9scg932akmvo2olg65skumcxeij2xhea0h2ffks9oyi6sq8b72cqvsbcn27c8slahjmdhkpfvlg5ay0vgdjh60lpcay9u5g7kuh7q9xpqws3nmojsb3bz80o66z5xndv2jpchgviqfyqze7fth0yknm0g2ox2ov79zfwmzxuyta4zzs5gw6jo0z8cgr4gc19ya7av6xpv7tlfz62796lb6bmmbj081fsf5r24we1bqay0nzt41wcp9eucb4wq4wj1fspikbf83ga8dkvjy4kukslqyamhd11c3hfeg6l8pwtzujddnav13hz618b0how0b41u6on4pquuvfntpj49i607yu5rysdw9draiwmiwusb7459a7lqdxbg9zakckyqy480g23v70l5rx6yeslemmdhti6lvd6ekvc5arpstkkrvsk01je85l2yfge03xgxpb574r7s8le0rwq00foijshgr5gdwjhzy1hdvcy7cagecvlclpe8gv3o2glut5w5r815yhuq1muxcrb318rwpbv8uonkcxlegm3ya9bgdpdysl11w5wj61wgg0345c757vyy9ks04y8efs22p9zddqcs6ofaegr2oscrgkjqin1jyp9o8sk9a9a2ye2vjpf13luy1750d51h7gotfx10n4tfbbuzjq4pilvn0xi2n9uqm9d6vydu78njqa45s6qmekxkwmm1bd8qomlf24y3zju945j5hldqu1e80uk8cb2c7nvvo73w40kxsh84naq78lqlki40mww8wmt3zas0moer8ovwhoiv22ap3jrwf13azb1p2iy36eguuo23w2cla62mmrr3741ikkdh5rm86cllng49kom1jyzn7i1m4s68vzzehmfp8liwhvf9c1klxfwio8ikc8616lr4j5agav700o7faglekvdsor0als0so01omiui82y3et3rofk5mwed5bso3g9cm4dwkohhanaeq0brjf3wpgozexypam0oj3yffjwas7mvpu0uzcc7b878z209x85bxifj3fq8j5mifw7lidrptjq6l1jnm4abc0257jvr8p0v13g0a4i3bp0ubi2a3o9q5hq6gywefxa0zo0vo8h6ezm6i2xxc11f0afs9aicnau3i4vr349tiibruk3waf95lfqvan5mj6hzcb89boakwob6rzz7mrua3ry4czfuweet9wijiwtgsl78gzv040cnygf9ajngfxqdklaex2vndxlmwu9ama652i1435p1k26tcimwm97b3nzcc96cnv509v5ynyfn8ns3e8mudq8smo2xpylo1 == \7\l\2\e\q\v\8\c\x\c\a\c\q\c\s\i\a\1\v\n\a\y\b\y\r\u\x\y\2\t\d\b\f\e\i\s\1\8\6\1\w\0\j\d\y\e\x\s\l\g\y\0\3\d\e\y\s\o\k\d\7\9\f\i\x\b\p\w\a\5\3\f\d\d\c\i\9\h\f\1\n\0\q\7\u\2\c\h\8\k\5\d\a\2\h\l\m\m\u\s\o\g\9\2\u\4\j\9\h\g\m\c\m\3\v\3\z\u\5\6\l\m\g\y\l\2\6\h\7\w\j\0\w\c\s\u\h\9\u\f\l\k\v\3\1\j\y\h\g\f\x\g\d\9\6\b\n\i\q\5\w\t\h\0\f\5\n\k\w\1\s\i\h\1\v\9\8\x\d\8\l\z\9\i\p\l\a\l\s\7\p\j\8\q\m\4\y\5\n\l\7\b\0\k\4\l\t\0\w\5\2\2\h\x\r\6\t\b\0\9\u\q\v\r\b\0\j\9\p\p\p\e\k\0\m\w\u\7\q\c\0\v\7\s\n\1\4\i\n\c\0\x\l\b\1\7\1\d\o\3\m\f\q\9\n\q\n\3\y\h\i\f\4\b\o\j\e\2\j\e\b\t\g\8\5\u\o\u\g\3\u\w\v\4\p\e\t\d\j\3\3\n\5\o\1\n\c\q\l\s\u\k\5\0\q\q\v\s\t\v\1\9\v\8\c\q\9\t\e\5\r\2\o\u\7\9\6\i\n\i\g\4\g\1\x\c\k\i\w\u\i\u\p\h\o\g\3\3\g\m\a\z\x\t\j\4\3\j\n\f\x\a\b\i\g\o\j\k\v\8\t\b\v\z\g\k\y\y\e\l\g\k\b\0\t\r\b\u\8\1\m\i\l\h\j\k\5\3\x\n\q\a\z\r\e\k\g\0\q\d\5\6\6\x\f\o\q\t\r\x\7\u\5\k\m\k\q\x\y\l\6\l\9\4\1\u\t\9\m\a\p\h\o\2\e\4\6\9\s\b\k\x\k\i\l\1\u\h\z\h\n\n\p\3\j\2\7\y\s\5\m\a\h\x\r\p\f\d\2\s\u\4\b\s\q\o\d\7\l\1\n\5\0\a\l\z\w\i\r\o\x\f\k\s\0\t\t\o\q\p\k\x\6\u\1\f\1\v\0\9\s\i\l\z\v\i\n\t\f\2\0\e\z\s\w\1\4\y\y\s\f\m\h\u\7\0\9\u\c\m\h\c\6\0\h\z\x\m\z\l\9\n\v\h\s\s\5\5\b\h\h\i\e\7\m\s\n\3\w\h\v\e\v\t\i\0\5\2\x\u\i\5\g\z\b\m\3\n\z\f\2\i\w\n\p\b\i\b\n\5\r\1\d\6\j\a\2\h\q\b\v\6\k\w\n\k\7\e\3\s\o\z\z\1\k\2\w\t\1\2\w\j\v\2\k\f\1\0\n\u\v\a\x\e\s\1\d\8\e\f\t\4\3\e\4\p\g\w\l\e\4\x\h\5\g\v\q\e\n\e\k\o\6\y\9\k\e\v\a\d\w\7\m\w\5\m\s\d\f\p\7\y\6\4\o\1\8\7\u\4\2\4\x\t\d\1\w\t\7\5\u\r\7\k\k\w\0\5\k\6\m\f\b\j\m\9\2\6\t\s\0\w\h\l\f\1\r\q\6\g\o\i\7\d\6\q\n\n\5\w\k\b\f\9\g\i\d\h\w\l\m\s\g\l\x\7\9\j\j\c\n\g\h\j\e\h\i\d\q\z\2\m\8\3\q\9\6\0\7\m\h\w\1\v\0\1\5\l\7\f\e\6\l\m\s\0\1\n\2\p\9\5\d\t\9\n\i\y\g\b\m\s\9\q\j\f\2\o\l\l\m\n\k\8\i\s\x\0\9\7\6\k\v\u\e\g\r\d\s\h\j\m\w\k\6\8\q\4\y\z\h\p\e\b\u\c\j\7\5\y\6\x\5\a\b\6\5\c\n\3\c\d\p\q\z\n\2\j\s\g\a\b\8\2\l\h\x\i\t\i\x\k\r\x\l\k\2\o\3\f\a\5\p\9\b\k\n\c\l\5\i\b\t\4\v\o\0\t\u\b\s\w\1\u\n\p\p\w\y\0\u\b\l\g\4\u\r\s\n\p\d\x\d\c\2\c\4\y\1\l\z\m\v\d\e\x\c\t\h\c\5\6\b\1\m\t\x\x\e\h\s\o\o\z\m\5\h\f\e\t\v\8\u\c\d\u\m\4\r\m\4\6\2\t\a\v\k\7\9\4\5\
w\n\b\j\n\s\f\8\o\q\c\6\x\k\o\7\7\z\e\a\s\t\y\p\l\c\p\j\x\e\l\d\8\c\l\5\y\s\k\e\v\n\o\1\8\c\m\q\j\r\y\x\z\s\p\u\q\v\y\j\x\f\x\j\8\d\k\s\w\5\6\8\5\1\c\8\h\u\y\d\h\2\z\t\p\o\5\s\c\e\6\0\r\i\9\c\0\p\k\p\e\f\s\f\1\e\v\y\l\6\v\d\1\f\m\z\e\u\7\0\z\t\2\z\j\4\a\4\s\u\i\h\d\g\u\8\3\z\w\6\a\q\3\e\1\7\q\w\i\2\6\l\g\0\6\a\j\c\v\b\s\2\v\v\b\m\b\3\8\8\g\d\g\r\a\q\n\1\c\b\u\s\k\i\7\z\v\h\z\n\3\i\j\7\o\p\k\v\6\v\u\v\k\b\l\w\p\6\j\o\c\h\d\6\e\b\q\p\w\e\6\s\n\g\z\z\h\g\a\h\o\p\x\k\t\c\o\q\x\n\3\b\q\h\2\5\j\i\4\l\o\j\3\k\7\1\x\g\o\q\i\d\y\b\p\n\0\q\b\w\i\f\9\k\7\b\k\6\5\4\4\l\3\j\8\g\v\l\r\j\m\k\k\6\9\f\x\u\c\g\s\o\i\8\r\m\x\u\u\q\e\m\d\t\h\n\b\c\f\j\0\o\k\o\n\6\3\e\v\p\j\4\l\k\6\0\c\b\r\1\s\t\y\r\1\9\7\t\k\3\4\m\b\z\w\e\m\5\0\6\n\q\c\7\z\b\b\8\m\l\v\j\m\v\v\3\d\h\g\f\3\0\t\d\4\e\g\h\x\0\3\n\3\v\3\2\q\v\c\x\m\y\c\d\r\4\7\t\s\i\5\n\m\u\c\5\u\e\w\x\9\r\6\o\e\l\d\w\4\g\e\4\p\q\r\z\n\6\2\d\6\a\d\p\s\0\f\c\6\x\p\s\5\r\r\x\8\9\v\t\5\6\k\x\6\d\1\t\a\y\h\r\3\u\t\5\6\j\i\i\g\e\w\e\d\i\5\7\e\x\2\w\4\p\t\c\a\r\i\f\1\o\f\v\y\3\a\z\l\h\k\q\1\v\y\6\k\6\2\z\w\1\s\z\8\0\d\d\k\k\m\p\p\c\5\1\h\0\9\f\6\n\e\b\l\h\c\a\4\h\h\8\e\g\4\z\r\l\q\c\8\z\q\8\i\v\n\j\0\e\u\a\u\1\k\d\i\c\3\8\4\6\4\f\c\e\8\m\u\0\i\9\7\c\w\k\g\e\0\3\z\z\s\y\4\e\y\z\g\6\e\c\m\x\r\4\d\8\c\s\o\l\u\v\k\i\7\w\m\9\1\z\l\0\k\k\5\p\0\x\4\v\e\u\9\b\6\h\s\y\6\s\6\7\g\b\d\u\d\d\1\i\5\o\v\f\9\r\0\i\q\q\d\3\d\q\v\v\k\8\o\t\a\2\r\x\6\w\0\i\t\t\8\2\c\l\u\7\2\c\n\h\y\c\8\p\s\h\4\z\h\a\h\f\v\9\5\j\0\j\n\3\e\c\g\j\q\6\8\n\g\j\9\5\4\g\o\m\t\d\x\e\h\n\c\4\0\3\i\e\m\k\h\c\v\k\q\p\x\7\y\g\k\8\o\d\7\w\q\9\q\p\q\4\l\4\d\z\h\7\c\d\f\m\0\v\p\f\j\p\f\v\1\3\c\8\b\6\e\a\i\1\t\f\0\u\z\u\5\t\k\a\f\n\p\s\q\m\5\y\s\0\y\2\7\r\c\f\a\e\9\3\o\n\e\w\h\2\7\v\w\d\o\9\1\c\p\3\y\5\l\b\v\m\3\e\0\g\4\m\k\e\3\v\x\s\m\a\z\z\m\0\8\x\u\w\k\f\4\4\p\k\t\o\h\r\o\d\j\1\5\z\o\f\c\2\j\m\l\b\8\a\q\u\j\1\d\j\9\d\y\1\3\p\4\t\5\a\o\6\k\7\y\u\w\y\6\r\x\a\v\y\t\q\4\2\m\z\9\2\5\1\s\q\2\c\9\0\0\k\2\z\g\0\b\y\k\u\6\0\r\1\w\j\q\a\t\x\4\e\k\t\z\g\p\r\c\a\j\a\u\d\1\5\f\o\e\a\g\c\0\b\8\6\5\t\6\3\p\1\g\6\s\h\9\l\x\6\y\p\8\o\i\w\b\r\5\r\u\4\0\n\2\k\e\p\k\1\e\i\5\6\m\q\b\v\p\h\p\t\9\a\5\c\s\e\w\1\w\t\a\m\i\q\s\7\9\s\d\l\y\1\1\o\j\i\b\p\f\5\q\c\p\7\9\1\a\1\g\0\0\h\d\m\c\u\v\4\k\n\z\f\v\v\h\0\b\w\1\m\8\a\u\k\x\8\j\v\w\v\m\z\q\f\v\g\d\d\h\m\q\r\f\1\g\5\m\2\8\a\b\a\c\0\h\o\g\x\k\y\j\e\i\j\6\p\6\j\s\a\y\t\a\q\y\2\r\8\z\s\j\e\z\k\7\5\a\x\1\m\8\z\4\5\d\0\t\x\w\t\w\u\t\u\g\n\n\e\c\l\a\p\k\0\w\v\i\o\3\m\s\g\h\i\8\9\7\j\d\j\d\s\0\8\s\i\f\8\x\c\8\8\y\c\a\3\h\z\q\e\n\d\g\t\9\c\c\3\3\l\r\6\l\p\e\l\q\7\y\f\1\0\4\u\7\z\m\n\m\f\l\u\7\z\7\q\i\c\9\w\g\s\3\m\r\l\6\1\b\9\w\u\h\p\f\p\u\e\k\r\q\5\f\j\3\8\4\p\c\h\g\w\f\a\m\4\2\m\d\x\5\a\7\l\j\t\h\b\x\s\u\0\m\o\t\n\2\q\a\8\3\n\u\w\r\l\g\7\l\o\k\f\p\q\q\a\5\7\k\5\x\a\g\9\d\w\a\8\a\2\r\2\z\t\9\6\h\j\a\8\q\3\t\5\q\1\z\1\w\r\x\v\j\8\k\z\f\0\i\s\6\k\6\7\j\q\5\g\m\4\1\9\x\6\2\z\3\2\o\1\j\u\b\2\u\3\p\i\q\6\o\4\5\y\w\8\y\1\t\u\e\0\z\w\d\s\n\t\o\q\n\m\1\b\l\p\w\8\u\z\b\n\3\z\t\x\5\e\i\k\a\l\1\c\5\g\a\h\w\a\l\d\j\f\h\d\q\9\m\0\r\p\t\d\k\7\x\y\g\9\5\n\b\z\p\b\k\r\3\9\d\6\6\e\t\m\r\f\r\v\o\9\s\6\v\m\r\o\9\y\4\3\m\c\8\w\x\9\r\8\h\0\q\u\z\q\v\d\h\l\g\n\9\n\k\8\o\a\0\0\7\q\e\n\u\y\2\6\v\v\b\w\0\w\e\i\0\5\r\k\a\s\a\1\q\f\d\r\6\j\3\y\8\k\3\f\a\x\k\z\3\t\t\2\t\8\7\o\6\p\2\e\7\z\f\o\1\3\3\c\5\x\8\q\u\t\3\t\c\b\o\4\e\r\h\4\s\3\l\x\e\7\e\h\t\v\v\x\k\w\d\p\2\d\b\h\v\q\1\2\c\v\y\8\e\y\6\m\o\l\n\9\g\b\r\r\i\f\f\k\2\2\f\h\w\c\z\y\1\y\a\n\6\r\9\o\5\1\w\v\8\7\7\s\o\a\y\7\l\4\f\v\m\0\s\w\0\i\4\7\h\t\c\4\4\a\b\h\q\p\e\9\f\1\5\7\8\g\x\5\m\e\4\h\l\s\f\m\a\2\x\l\o\9\s\c\g\9\3\2\a\k\m\v\o\2\o\l\g\6\5\s\k\u\m\c\x\e\i\j\2\x\h\e\a\0\h\2
\f\f\k\s\9\o\y\i\6\s\q\8\b\7\2\c\q\v\s\b\c\n\2\7\c\8\s\l\a\h\j\m\d\h\k\p\f\v\l\g\5\a\y\0\v\g\d\j\h\6\0\l\p\c\a\y\9\u\5\g\7\k\u\h\7\q\9\x\p\q\w\s\3\n\m\o\j\s\b\3\b\z\8\0\o\6\6\z\5\x\n\d\v\2\j\p\c\h\g\v\i\q\f\y\q\z\e\7\f\t\h\0\y\k\n\m\0\g\2\o\x\2\o\v\7\9\z\f\w\m\z\x\u\y\t\a\4\z\z\s\5\g\w\6\j\o\0\z\8\c\g\r\4\g\c\1\9\y\a\7\a\v\6\x\p\v\7\t\l\f\z\6\2\7\9\6\l\b\6\b\m\m\b\j\0\8\1\f\s\f\5\r\2\4\w\e\1\b\q\a\y\0\n\z\t\4\1\w\c\p\9\e\u\c\b\4\w\q\4\w\j\1\f\s\p\i\k\b\f\8\3\g\a\8\d\k\v\j\y\4\k\u\k\s\l\q\y\a\m\h\d\1\1\c\3\h\f\e\g\6\l\8\p\w\t\z\u\j\d\d\n\a\v\1\3\h\z\6\1\8\b\0\h\o\w\0\b\4\1\u\6\o\n\4\p\q\u\u\v\f\n\t\p\j\4\9\i\6\0\7\y\u\5\r\y\s\d\w\9\d\r\a\i\w\m\i\w\u\s\b\7\4\5\9\a\7\l\q\d\x\b\g\9\z\a\k\c\k\y\q\y\4\8\0\g\2\3\v\7\0\l\5\r\x\6\y\e\s\l\e\m\m\d\h\t\i\6\l\v\d\6\e\k\v\c\5\a\r\p\s\t\k\k\r\v\s\k\0\1\j\e\8\5\l\2\y\f\g\e\0\3\x\g\x\p\b\5\7\4\r\7\s\8\l\e\0\r\w\q\0\0\f\o\i\j\s\h\g\r\5\g\d\w\j\h\z\y\1\h\d\v\c\y\7\c\a\g\e\c\v\l\c\l\p\e\8\g\v\3\o\2\g\l\u\t\5\w\5\r\8\1\5\y\h\u\q\1\m\u\x\c\r\b\3\1\8\r\w\p\b\v\8\u\o\n\k\c\x\l\e\g\m\3\y\a\9\b\g\d\p\d\y\s\l\1\1\w\5\w\j\6\1\w\g\g\0\3\4\5\c\7\5\7\v\y\y\9\k\s\0\4\y\8\e\f\s\2\2\p\9\z\d\d\q\c\s\6\o\f\a\e\g\r\2\o\s\c\r\g\k\j\q\i\n\1\j\y\p\9\o\8\s\k\9\a\9\a\2\y\e\2\v\j\p\f\1\3\l\u\y\1\7\5\0\d\5\1\h\7\g\o\t\f\x\1\0\n\4\t\f\b\b\u\z\j\q\4\p\i\l\v\n\0\x\i\2\n\9\u\q\m\9\d\6\v\y\d\u\7\8\n\j\q\a\4\5\s\6\q\m\e\k\x\k\w\m\m\1\b\d\8\q\o\m\l\f\2\4\y\3\z\j\u\9\4\5\j\5\h\l\d\q\u\1\e\8\0\u\k\8\c\b\2\c\7\n\v\v\o\7\3\w\4\0\k\x\s\h\8\4\n\a\q\7\8\l\q\l\k\i\4\0\m\w\w\8\w\m\t\3\z\a\s\0\m\o\e\r\8\o\v\w\h\o\i\v\2\2\a\p\3\j\r\w\f\1\3\a\z\b\1\p\2\i\y\3\6\e\g\u\u\o\2\3\w\2\c\l\a\6\2\m\m\r\r\3\7\4\1\i\k\k\d\h\5\r\m\8\6\c\l\l\n\g\4\9\k\o\m\1\j\y\z\n\7\i\1\m\4\s\6\8\v\z\z\e\h\m\f\p\8\l\i\w\h\v\f\9\c\1\k\l\x\f\w\i\o\8\i\k\c\8\6\1\6\l\r\4\j\5\a\g\a\v\7\0\0\o\7\f\a\g\l\e\k\v\d\s\o\r\0\a\l\s\0\s\o\0\1\o\m\i\u\i\8\2\y\3\e\t\3\r\o\f\k\5\m\w\e\d\5\b\s\o\3\g\9\c\m\4\d\w\k\o\h\h\a\n\a\e\q\0\b\r\j\f\3\w\p\g\o\z\e\x\y\p\a\m\0\o\j\3\y\f\f\j\w\a\s\7\m\v\p\u\0\u\z\c\c\7\b\8\7\8\z\2\0\9\x\8\5\b\x\i\f\j\3\f\q\8\j\5\m\i\f\w\7\l\i\d\r\p\t\j\q\6\l\1\j\n\m\4\a\b\c\0\2\5\7\j\v\r\8\p\0\v\1\3\g\0\a\4\i\3\b\p\0\u\b\i\2\a\3\o\9\q\5\h\q\6\g\y\w\e\f\x\a\0\z\o\0\v\o\8\h\6\e\z\m\6\i\2\x\x\c\1\1\f\0\a\f\s\9\a\i\c\n\a\u\3\i\4\v\r\3\4\9\t\i\i\b\r\u\k\3\w\a\f\9\5\l\f\q\v\a\n\5\m\j\6\h\z\c\b\8\9\b\o\a\k\w\o\b\6\r\z\z\7\m\r\u\a\3\r\y\4\c\z\f\u\w\e\e\t\9\w\i\j\i\w\t\g\s\l\7\8\g\z\v\0\4\0\c\n\y\g\f\9\a\j\n\g\f\x\q\d\k\l\a\e\x\2\v\n\d\x\l\m\w\u\9\a\m\a\6\5\2\i\1\4\3\5\p\1\k\2\6\t\c\i\m\w\m\9\7\b\3\n\z\c\c\9\6\c\n\v\5\0\9\v\5\y\n\y\f\n\8\n\s\3\e\8\m\u\d\q\8\s\m\o\2\x\p\y\l\o\1 ]] 00:07:38.203 ************************************ 00:07:38.203 END TEST dd_rw_offset 00:07:38.203 ************************************ 00:07:38.203 00:07:38.203 real 0m0.995s 00:07:38.203 user 0m0.676s 00:07:38.203 sys 0m0.381s 00:07:38.203 12:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.203 12:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:38.203 12:27:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:38.203 12:27:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:38.203 12:27:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:38.203 12:27:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:38.203 12:27:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:38.203 12:27:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:07:38.203 12:27:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:38.203 12:27:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:38.203 12:27:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:38.203 12:27:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:38.203 12:27:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.203 { 00:07:38.203 "subsystems": [ 00:07:38.203 { 00:07:38.203 "subsystem": "bdev", 00:07:38.203 "config": [ 00:07:38.203 { 00:07:38.203 "params": { 00:07:38.203 "trtype": "pcie", 00:07:38.203 "traddr": "0000:00:10.0", 00:07:38.203 "name": "Nvme0" 00:07:38.203 }, 00:07:38.203 "method": "bdev_nvme_attach_controller" 00:07:38.203 }, 00:07:38.203 { 00:07:38.203 "method": "bdev_wait_for_examine" 00:07:38.203 } 00:07:38.203 ] 00:07:38.203 } 00:07:38.203 ] 00:07:38.203 } 00:07:38.203 [2024-11-19 12:27:43.445661] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:38.203 [2024-11-19 12:27:43.445771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73118 ] 00:07:38.463 [2024-11-19 12:27:43.583363] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.463 [2024-11-19 12:27:43.614308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.463 [2024-11-19 12:27:43.643123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.726  [2024-11-19T12:27:43.986Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:38.726 00:07:38.726 12:27:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.726 00:07:38.726 real 0m14.076s 00:07:38.726 user 0m10.151s 00:07:38.726 sys 0m4.450s 00:07:38.726 12:27:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.726 12:27:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.726 ************************************ 00:07:38.726 END TEST spdk_dd_basic_rw 00:07:38.726 ************************************ 00:07:38.726 12:27:43 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:38.726 12:27:43 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.726 12:27:43 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.726 12:27:43 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:38.726 ************************************ 00:07:38.726 START TEST spdk_dd_posix 00:07:38.726 ************************************ 00:07:38.726 12:27:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:38.987 * Looking for test storage... 
00:07:38.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:38.987 12:27:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:38.987 12:27:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:38.987 12:27:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lcov --version 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:38.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.987 --rc genhtml_branch_coverage=1 00:07:38.987 --rc genhtml_function_coverage=1 00:07:38.987 --rc genhtml_legend=1 00:07:38.987 --rc geninfo_all_blocks=1 00:07:38.987 --rc geninfo_unexecuted_blocks=1 00:07:38.987 00:07:38.987 ' 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:38.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.987 --rc genhtml_branch_coverage=1 00:07:38.987 --rc genhtml_function_coverage=1 00:07:38.987 --rc genhtml_legend=1 00:07:38.987 --rc geninfo_all_blocks=1 00:07:38.987 --rc geninfo_unexecuted_blocks=1 00:07:38.987 00:07:38.987 ' 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:38.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.987 --rc genhtml_branch_coverage=1 00:07:38.987 --rc genhtml_function_coverage=1 00:07:38.987 --rc genhtml_legend=1 00:07:38.987 --rc geninfo_all_blocks=1 00:07:38.987 --rc geninfo_unexecuted_blocks=1 00:07:38.987 00:07:38.987 ' 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:38.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.987 --rc genhtml_branch_coverage=1 00:07:38.987 --rc genhtml_function_coverage=1 00:07:38.987 --rc genhtml_legend=1 00:07:38.987 --rc geninfo_all_blocks=1 00:07:38.987 --rc geninfo_unexecuted_blocks=1 00:07:38.987 00:07:38.987 ' 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.987 12:27:44 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:38.988 * First test run, liburing in use 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:38.988 ************************************ 00:07:38.988 START TEST dd_flag_append 00:07:38.988 ************************************ 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=e4czftblnpw7mbug9uzpn3kki3ynqo4i 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=oulp0wfpjfn7281mn17bfzqpwqy5bwu4 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s e4czftblnpw7mbug9uzpn3kki3ynqo4i 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s oulp0wfpjfn7281mn17bfzqpwqy5bwu4 00:07:38.988 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:38.988 [2024-11-19 12:27:44.205445] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:38.988 [2024-11-19 12:27:44.205594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73190 ] 00:07:39.248 [2024-11-19 12:27:44.356065] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.248 [2024-11-19 12:27:44.390307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.248 [2024-11-19 12:27:44.416351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.248  [2024-11-19T12:27:44.768Z] Copying: 32/32 [B] (average 31 kBps) 00:07:39.508 00:07:39.508 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ oulp0wfpjfn7281mn17bfzqpwqy5bwu4e4czftblnpw7mbug9uzpn3kki3ynqo4i == \o\u\l\p\0\w\f\p\j\f\n\7\2\8\1\m\n\1\7\b\f\z\q\p\w\q\y\5\b\w\u\4\e\4\c\z\f\t\b\l\n\p\w\7\m\b\u\g\9\u\z\p\n\3\k\k\i\3\y\n\q\o\4\i ]] 00:07:39.508 00:07:39.508 real 0m0.435s 00:07:39.508 user 0m0.215s 00:07:39.508 sys 0m0.178s 00:07:39.508 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.508 ************************************ 00:07:39.508 END TEST dd_flag_append 00:07:39.508 ************************************ 00:07:39.508 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:39.508 12:27:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:39.508 12:27:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.508 12:27:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.508 12:27:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:39.508 ************************************ 00:07:39.508 START TEST dd_flag_directory 00:07:39.508 ************************************ 00:07:39.508 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:07:39.508 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:39.508 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:39.508 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:39.509 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.509 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.509 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.509 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.509 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.509 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.509 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.509 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:39.509 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:39.509 [2024-11-19 12:27:44.654559] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:39.509 [2024-11-19 12:27:44.654645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73213 ] 00:07:39.768 [2024-11-19 12:27:44.784083] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.768 [2024-11-19 12:27:44.818135] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.768 [2024-11-19 12:27:44.844265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.768 [2024-11-19 12:27:44.858317] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:39.768 [2024-11-19 12:27:44.858419] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:39.768 [2024-11-19 12:27:44.858447] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:39.768 [2024-11-19 12:27:44.912496] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.768 12:27:44 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:39.768 12:27:44 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:40.027 [2024-11-19 12:27:45.036581] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:40.027 [2024-11-19 12:27:45.036714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73228 ] 00:07:40.027 [2024-11-19 12:27:45.175547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.027 [2024-11-19 12:27:45.206969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.027 [2024-11-19 12:27:45.233723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.027 [2024-11-19 12:27:45.248282] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:40.027 [2024-11-19 12:27:45.248350] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:40.027 [2024-11-19 12:27:45.248379] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.286 [2024-11-19 12:27:45.304495] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:40.286 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:40.286 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.286 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:40.286 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:40.286 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:40.286 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.286 00:07:40.286 real 0m0.762s 00:07:40.286 user 0m0.365s 00:07:40.286 sys 0m0.190s 00:07:40.286 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.286 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:40.286 ************************************ 00:07:40.286 END TEST dd_flag_directory 00:07:40.286 ************************************ 00:07:40.286 12:27:45 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:40.286 12:27:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.286 12:27:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.286 12:27:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:40.286 ************************************ 00:07:40.286 START TEST dd_flag_nofollow 00:07:40.286 ************************************ 00:07:40.286 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:07:40.286 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:40.287 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:40.287 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:40.287 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:40.287 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.287 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:40.287 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.287 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.287 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.287 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.287 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.287 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.287 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.287 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.287 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:40.287 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.287 [2024-11-19 12:27:45.482161] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:40.287 [2024-11-19 12:27:45.482272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73251 ] 00:07:40.546 [2024-11-19 12:27:45.623130] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.546 [2024-11-19 12:27:45.656614] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.546 [2024-11-19 12:27:45.683039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.546 [2024-11-19 12:27:45.697056] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:40.546 [2024-11-19 12:27:45.697121] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:40.546 [2024-11-19 12:27:45.697150] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.546 [2024-11-19 12:27:45.757692] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:40.805 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:40.805 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.805 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:40.805 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:40.805 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:40.805 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.805 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:40.805 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:40.805 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:40.805 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.805 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.805 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.805 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.805 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.805 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.805 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.805 12:27:45 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:40.806 12:27:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:40.806 [2024-11-19 12:27:45.883130] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:40.806 [2024-11-19 12:27:45.883228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73266 ] 00:07:40.806 [2024-11-19 12:27:46.021977] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.806 [2024-11-19 12:27:46.052928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.065 [2024-11-19 12:27:46.079639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.065 [2024-11-19 12:27:46.093775] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:41.065 [2024-11-19 12:27:46.093836] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:41.065 [2024-11-19 12:27:46.093866] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.065 [2024-11-19 12:27:46.151192] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:41.065 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:41.065 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:41.065 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:41.065 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:41.065 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:41.065 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:41.065 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:41.065 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:41.065 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:41.065 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.065 [2024-11-19 12:27:46.279270] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
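The two NOT-wrapped runs above are expected to fail: with nofollow requested, spdk_dd refuses to follow dd.dump0.link / dd.dump1.link, open() returns ELOOP ("Too many levels of symbolic links" in the log), and the non-zero exit status is exactly what the NOT helper asserts. The same behaviour can be sketched with GNU coreutils dd; the snippet below is only an illustration run in a throwaway scratch directory, not part of the SPDK test suite:

  # Reproduce the nofollow behaviour with coreutils dd (illustrative only).
  cd "$(mktemp -d)"
  echo data > dump0
  ln -fs dump0 dump0.link
  # nofollow refuses the symlink: open() fails with ELOOP.
  dd if=dump0.link iflag=nofollow of=dump1 2>&1 | grep -q 'Too many levels of symbolic links' && echo 'nofollow rejected the link'
  # Without the flag the link is followed and the copy succeeds.
  dd if=dump0.link of=dump1 status=none && cmp -s dump0 dump1 && echo 'copy matches source'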
00:07:41.065 [2024-11-19 12:27:46.279382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73268 ] 00:07:41.324 [2024-11-19 12:27:46.417999] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.324 [2024-11-19 12:27:46.449314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.324 [2024-11-19 12:27:46.475238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.324  [2024-11-19T12:27:46.843Z] Copying: 512/512 [B] (average 500 kBps) 00:07:41.583 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 9pfmwhtwvyrajy2gs3fgtm7ie9mwmoqftktnu7b4xk3b3206jhjgturz9rnkulnuyrz4vv7sg2jkh9z73x5034f2ackxc84jdmweszpq3pig5g1l89toxjtgqs64wnws4i7fba3h1d5m1cwako93g6wn98vfjamog8jnr6o174nf4nw8kiygnteqyo9n1jjveud4x7f68j6n52gv03k21dd69b45k2uo09irrgr4f7jb8sqnjfzgus564xkfvsgmlwyu3we616zgotjnigmr3rdgeu41ya2f7ucdknc7moa7snud02hqs7brbj53lb72phdvdv44lgpf3p10m0h57zqm5fr1o5ipy3u7tvrc39fjjcxyt40ikmo42kfrw5xwyxcjgmh9ufexj4cjf148u10w7izuf5esqstczpirgwqgmgyylsrfya9r390z6f5ytcx7xgebkj8ommwcehg965mzn26e9ygztcnmdv3fcav9e0gz1gmpw2plkgrj2tao == \9\p\f\m\w\h\t\w\v\y\r\a\j\y\2\g\s\3\f\g\t\m\7\i\e\9\m\w\m\o\q\f\t\k\t\n\u\7\b\4\x\k\3\b\3\2\0\6\j\h\j\g\t\u\r\z\9\r\n\k\u\l\n\u\y\r\z\4\v\v\7\s\g\2\j\k\h\9\z\7\3\x\5\0\3\4\f\2\a\c\k\x\c\8\4\j\d\m\w\e\s\z\p\q\3\p\i\g\5\g\1\l\8\9\t\o\x\j\t\g\q\s\6\4\w\n\w\s\4\i\7\f\b\a\3\h\1\d\5\m\1\c\w\a\k\o\9\3\g\6\w\n\9\8\v\f\j\a\m\o\g\8\j\n\r\6\o\1\7\4\n\f\4\n\w\8\k\i\y\g\n\t\e\q\y\o\9\n\1\j\j\v\e\u\d\4\x\7\f\6\8\j\6\n\5\2\g\v\0\3\k\2\1\d\d\6\9\b\4\5\k\2\u\o\0\9\i\r\r\g\r\4\f\7\j\b\8\s\q\n\j\f\z\g\u\s\5\6\4\x\k\f\v\s\g\m\l\w\y\u\3\w\e\6\1\6\z\g\o\t\j\n\i\g\m\r\3\r\d\g\e\u\4\1\y\a\2\f\7\u\c\d\k\n\c\7\m\o\a\7\s\n\u\d\0\2\h\q\s\7\b\r\b\j\5\3\l\b\7\2\p\h\d\v\d\v\4\4\l\g\p\f\3\p\1\0\m\0\h\5\7\z\q\m\5\f\r\1\o\5\i\p\y\3\u\7\t\v\r\c\3\9\f\j\j\c\x\y\t\4\0\i\k\m\o\4\2\k\f\r\w\5\x\w\y\x\c\j\g\m\h\9\u\f\e\x\j\4\c\j\f\1\4\8\u\1\0\w\7\i\z\u\f\5\e\s\q\s\t\c\z\p\i\r\g\w\q\g\m\g\y\y\l\s\r\f\y\a\9\r\3\9\0\z\6\f\5\y\t\c\x\7\x\g\e\b\k\j\8\o\m\m\w\c\e\h\g\9\6\5\m\z\n\2\6\e\9\y\g\z\t\c\n\m\d\v\3\f\c\a\v\9\e\0\g\z\1\g\m\p\w\2\p\l\k\g\r\j\2\t\a\o ]] 00:07:41.584 00:07:41.584 real 0m1.194s 00:07:41.584 user 0m0.599s 00:07:41.584 sys 0m0.349s 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:41.584 ************************************ 00:07:41.584 END TEST dd_flag_nofollow 00:07:41.584 ************************************ 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:41.584 ************************************ 00:07:41.584 START TEST dd_flag_noatime 00:07:41.584 ************************************ 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732019266 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732019266 00:07:41.584 12:27:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:42.521 12:27:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.521 [2024-11-19 12:27:47.732714] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:42.521 [2024-11-19 12:27:47.732801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73305 ] 00:07:42.780 [2024-11-19 12:27:47.864334] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.780 [2024-11-19 12:27:47.897031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.780 [2024-11-19 12:27:47.923213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.780  [2024-11-19T12:27:48.300Z] Copying: 512/512 [B] (average 500 kBps) 00:07:43.040 00:07:43.040 12:27:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:43.040 12:27:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732019266 )) 00:07:43.040 12:27:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.040 12:27:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732019266 )) 00:07:43.040 12:27:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.040 [2024-11-19 12:27:48.129322] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
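The two arithmetic checks above, (( atime_if == 1732019266 )) and (( atime_of == 1732019266 )), are the core of the noatime case: after sleeping one second and copying dump0 with the flag set, the access time read back via stat --printf=%X must still equal the value captured before the copy. A rough stand-alone equivalent with GNU dd and stat is sketched below (illustrative only; it assumes you own the file, since the kernel otherwise rejects O_NOATIME):

  # Verify that a noatime read leaves the access time untouched (illustrative only).
  f=$(mktemp)
  echo data > "$f"
  before=$(stat --printf=%X "$f")
  sleep 1
  dd if="$f" iflag=noatime of=/dev/null status=none
  after=$(stat --printf=%X "$f")
  [ "$before" -eq "$after" ] && echo 'atime unchanged'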
00:07:43.040 [2024-11-19 12:27:48.129429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73324 ] 00:07:43.040 [2024-11-19 12:27:48.267367] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.300 [2024-11-19 12:27:48.299584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.300 [2024-11-19 12:27:48.325669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.300  [2024-11-19T12:27:48.560Z] Copying: 512/512 [B] (average 500 kBps) 00:07:43.300 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732019268 )) 00:07:43.300 00:07:43.300 real 0m1.802s 00:07:43.300 user 0m0.388s 00:07:43.300 sys 0m0.337s 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.300 ************************************ 00:07:43.300 END TEST dd_flag_noatime 00:07:43.300 ************************************ 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:43.300 ************************************ 00:07:43.300 START TEST dd_flags_misc 00:07:43.300 ************************************ 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:43.300 12:27:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:43.560 [2024-11-19 12:27:48.584893] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:43.560 [2024-11-19 12:27:48.584987] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73347 ] 00:07:43.560 [2024-11-19 12:27:48.723260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.560 [2024-11-19 12:27:48.754809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.560 [2024-11-19 12:27:48.780987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.560  [2024-11-19T12:27:49.079Z] Copying: 512/512 [B] (average 500 kBps) 00:07:43.820 00:07:43.820 12:27:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qeedgavisq3mq4ssfovuau40s3pbtxtvgq2b5vdkwdp1a2tc7s79uca8ppb82lkibq60zaqkzdsoov9dmvy6a9j7yvaybhtqm3m5ia3femwzqef657ddvppsyz1eroxy8bomseq4ctjfiy18uqmr9x5nsgmtwkl68r1ikyyj0z1kxb4iuops7nxsauz2vgcoc0pd7mppbzcp1xoqv7fe02j1txew282b7jjl7c7tjhjmcamlcunce000mmrfrm85elw6kbq3fz6b14jag8dz1jw5irofu7gx6nqdjpscy5axo54zrqi6fxku3lan6x9l6b490gd4m1j6ow2yi3hsponj9xscrikq68krd3ipg8li9br9ojgzhfdaxn6g3vwc3cmm1nxle0ylzudrrh4bsus3fzc1secb2ov0255oa887nbfwo9zhyeynu97eiwuup8nbd4vfyt2xkh8f9zor0d1b809shhlwbgq4s1veeh4fbj6fs5n4qc5guyx3i0z7 == \q\e\e\d\g\a\v\i\s\q\3\m\q\4\s\s\f\o\v\u\a\u\4\0\s\3\p\b\t\x\t\v\g\q\2\b\5\v\d\k\w\d\p\1\a\2\t\c\7\s\7\9\u\c\a\8\p\p\b\8\2\l\k\i\b\q\6\0\z\a\q\k\z\d\s\o\o\v\9\d\m\v\y\6\a\9\j\7\y\v\a\y\b\h\t\q\m\3\m\5\i\a\3\f\e\m\w\z\q\e\f\6\5\7\d\d\v\p\p\s\y\z\1\e\r\o\x\y\8\b\o\m\s\e\q\4\c\t\j\f\i\y\1\8\u\q\m\r\9\x\5\n\s\g\m\t\w\k\l\6\8\r\1\i\k\y\y\j\0\z\1\k\x\b\4\i\u\o\p\s\7\n\x\s\a\u\z\2\v\g\c\o\c\0\p\d\7\m\p\p\b\z\c\p\1\x\o\q\v\7\f\e\0\2\j\1\t\x\e\w\2\8\2\b\7\j\j\l\7\c\7\t\j\h\j\m\c\a\m\l\c\u\n\c\e\0\0\0\m\m\r\f\r\m\8\5\e\l\w\6\k\b\q\3\f\z\6\b\1\4\j\a\g\8\d\z\1\j\w\5\i\r\o\f\u\7\g\x\6\n\q\d\j\p\s\c\y\5\a\x\o\5\4\z\r\q\i\6\f\x\k\u\3\l\a\n\6\x\9\l\6\b\4\9\0\g\d\4\m\1\j\6\o\w\2\y\i\3\h\s\p\o\n\j\9\x\s\c\r\i\k\q\6\8\k\r\d\3\i\p\g\8\l\i\9\b\r\9\o\j\g\z\h\f\d\a\x\n\6\g\3\v\w\c\3\c\m\m\1\n\x\l\e\0\y\l\z\u\d\r\r\h\4\b\s\u\s\3\f\z\c\1\s\e\c\b\2\o\v\0\2\5\5\o\a\8\8\7\n\b\f\w\o\9\z\h\y\e\y\n\u\9\7\e\i\w\u\u\p\8\n\b\d\4\v\f\y\t\2\x\k\h\8\f\9\z\o\r\0\d\1\b\8\0\9\s\h\h\l\w\b\g\q\4\s\1\v\e\e\h\4\f\b\j\6\f\s\5\n\4\q\c\5\g\u\y\x\3\i\0\z\7 ]] 00:07:43.820 12:27:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:43.820 12:27:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:43.820 [2024-11-19 12:27:48.960530] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
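Each pass of dd_flags_misc follows the pattern that has just completed for the direct/direct pair: generate 512 random bytes into dump0, copy them through spdk_dd with one read flag and one write flag, then read dump1 back and require it to be identical to the source (the long [[ ... == ... ]] expressions are that comparison as printed by xtrace). Reduced to plain shell with coreutils dd, the idea looks roughly like this; it is an illustration, not the test's actual helper, and dsync is used here to sidestep the alignment rules that direct I/O would add:

  # One round-trip of the flags_misc pattern (illustrative only).
  cd "$(mktemp -d)"
  payload=$(head -c 384 /dev/urandom | base64 -w0)   # 512 printable bytes, a stand-in for gen_bytes 512
  printf %s "$payload" > dump0
  dd if=dump0 of=dump1 iflag=nonblock oflag=dsync status=none
  [[ $(cat dump1) == "$payload" ]] && echo 'round-trip intact'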
00:07:43.820 [2024-11-19 12:27:48.960632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73362 ] 00:07:44.080 [2024-11-19 12:27:49.089626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.080 [2024-11-19 12:27:49.120771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.080 [2024-11-19 12:27:49.146492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.080  [2024-11-19T12:27:49.340Z] Copying: 512/512 [B] (average 500 kBps) 00:07:44.080 00:07:44.080 12:27:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qeedgavisq3mq4ssfovuau40s3pbtxtvgq2b5vdkwdp1a2tc7s79uca8ppb82lkibq60zaqkzdsoov9dmvy6a9j7yvaybhtqm3m5ia3femwzqef657ddvppsyz1eroxy8bomseq4ctjfiy18uqmr9x5nsgmtwkl68r1ikyyj0z1kxb4iuops7nxsauz2vgcoc0pd7mppbzcp1xoqv7fe02j1txew282b7jjl7c7tjhjmcamlcunce000mmrfrm85elw6kbq3fz6b14jag8dz1jw5irofu7gx6nqdjpscy5axo54zrqi6fxku3lan6x9l6b490gd4m1j6ow2yi3hsponj9xscrikq68krd3ipg8li9br9ojgzhfdaxn6g3vwc3cmm1nxle0ylzudrrh4bsus3fzc1secb2ov0255oa887nbfwo9zhyeynu97eiwuup8nbd4vfyt2xkh8f9zor0d1b809shhlwbgq4s1veeh4fbj6fs5n4qc5guyx3i0z7 == \q\e\e\d\g\a\v\i\s\q\3\m\q\4\s\s\f\o\v\u\a\u\4\0\s\3\p\b\t\x\t\v\g\q\2\b\5\v\d\k\w\d\p\1\a\2\t\c\7\s\7\9\u\c\a\8\p\p\b\8\2\l\k\i\b\q\6\0\z\a\q\k\z\d\s\o\o\v\9\d\m\v\y\6\a\9\j\7\y\v\a\y\b\h\t\q\m\3\m\5\i\a\3\f\e\m\w\z\q\e\f\6\5\7\d\d\v\p\p\s\y\z\1\e\r\o\x\y\8\b\o\m\s\e\q\4\c\t\j\f\i\y\1\8\u\q\m\r\9\x\5\n\s\g\m\t\w\k\l\6\8\r\1\i\k\y\y\j\0\z\1\k\x\b\4\i\u\o\p\s\7\n\x\s\a\u\z\2\v\g\c\o\c\0\p\d\7\m\p\p\b\z\c\p\1\x\o\q\v\7\f\e\0\2\j\1\t\x\e\w\2\8\2\b\7\j\j\l\7\c\7\t\j\h\j\m\c\a\m\l\c\u\n\c\e\0\0\0\m\m\r\f\r\m\8\5\e\l\w\6\k\b\q\3\f\z\6\b\1\4\j\a\g\8\d\z\1\j\w\5\i\r\o\f\u\7\g\x\6\n\q\d\j\p\s\c\y\5\a\x\o\5\4\z\r\q\i\6\f\x\k\u\3\l\a\n\6\x\9\l\6\b\4\9\0\g\d\4\m\1\j\6\o\w\2\y\i\3\h\s\p\o\n\j\9\x\s\c\r\i\k\q\6\8\k\r\d\3\i\p\g\8\l\i\9\b\r\9\o\j\g\z\h\f\d\a\x\n\6\g\3\v\w\c\3\c\m\m\1\n\x\l\e\0\y\l\z\u\d\r\r\h\4\b\s\u\s\3\f\z\c\1\s\e\c\b\2\o\v\0\2\5\5\o\a\8\8\7\n\b\f\w\o\9\z\h\y\e\y\n\u\9\7\e\i\w\u\u\p\8\n\b\d\4\v\f\y\t\2\x\k\h\8\f\9\z\o\r\0\d\1\b\8\0\9\s\h\h\l\w\b\g\q\4\s\1\v\e\e\h\4\f\b\j\6\f\s\5\n\4\q\c\5\g\u\y\x\3\i\0\z\7 ]] 00:07:44.080 12:27:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:44.080 12:27:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:44.080 [2024-11-19 12:27:49.336194] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:44.080 [2024-11-19 12:27:49.336297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73366 ] 00:07:44.342 [2024-11-19 12:27:49.470701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.342 [2024-11-19 12:27:49.505334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.342 [2024-11-19 12:27:49.531535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.342  [2024-11-19T12:27:49.861Z] Copying: 512/512 [B] (average 100 kBps) 00:07:44.601 00:07:44.601 12:27:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qeedgavisq3mq4ssfovuau40s3pbtxtvgq2b5vdkwdp1a2tc7s79uca8ppb82lkibq60zaqkzdsoov9dmvy6a9j7yvaybhtqm3m5ia3femwzqef657ddvppsyz1eroxy8bomseq4ctjfiy18uqmr9x5nsgmtwkl68r1ikyyj0z1kxb4iuops7nxsauz2vgcoc0pd7mppbzcp1xoqv7fe02j1txew282b7jjl7c7tjhjmcamlcunce000mmrfrm85elw6kbq3fz6b14jag8dz1jw5irofu7gx6nqdjpscy5axo54zrqi6fxku3lan6x9l6b490gd4m1j6ow2yi3hsponj9xscrikq68krd3ipg8li9br9ojgzhfdaxn6g3vwc3cmm1nxle0ylzudrrh4bsus3fzc1secb2ov0255oa887nbfwo9zhyeynu97eiwuup8nbd4vfyt2xkh8f9zor0d1b809shhlwbgq4s1veeh4fbj6fs5n4qc5guyx3i0z7 == \q\e\e\d\g\a\v\i\s\q\3\m\q\4\s\s\f\o\v\u\a\u\4\0\s\3\p\b\t\x\t\v\g\q\2\b\5\v\d\k\w\d\p\1\a\2\t\c\7\s\7\9\u\c\a\8\p\p\b\8\2\l\k\i\b\q\6\0\z\a\q\k\z\d\s\o\o\v\9\d\m\v\y\6\a\9\j\7\y\v\a\y\b\h\t\q\m\3\m\5\i\a\3\f\e\m\w\z\q\e\f\6\5\7\d\d\v\p\p\s\y\z\1\e\r\o\x\y\8\b\o\m\s\e\q\4\c\t\j\f\i\y\1\8\u\q\m\r\9\x\5\n\s\g\m\t\w\k\l\6\8\r\1\i\k\y\y\j\0\z\1\k\x\b\4\i\u\o\p\s\7\n\x\s\a\u\z\2\v\g\c\o\c\0\p\d\7\m\p\p\b\z\c\p\1\x\o\q\v\7\f\e\0\2\j\1\t\x\e\w\2\8\2\b\7\j\j\l\7\c\7\t\j\h\j\m\c\a\m\l\c\u\n\c\e\0\0\0\m\m\r\f\r\m\8\5\e\l\w\6\k\b\q\3\f\z\6\b\1\4\j\a\g\8\d\z\1\j\w\5\i\r\o\f\u\7\g\x\6\n\q\d\j\p\s\c\y\5\a\x\o\5\4\z\r\q\i\6\f\x\k\u\3\l\a\n\6\x\9\l\6\b\4\9\0\g\d\4\m\1\j\6\o\w\2\y\i\3\h\s\p\o\n\j\9\x\s\c\r\i\k\q\6\8\k\r\d\3\i\p\g\8\l\i\9\b\r\9\o\j\g\z\h\f\d\a\x\n\6\g\3\v\w\c\3\c\m\m\1\n\x\l\e\0\y\l\z\u\d\r\r\h\4\b\s\u\s\3\f\z\c\1\s\e\c\b\2\o\v\0\2\5\5\o\a\8\8\7\n\b\f\w\o\9\z\h\y\e\y\n\u\9\7\e\i\w\u\u\p\8\n\b\d\4\v\f\y\t\2\x\k\h\8\f\9\z\o\r\0\d\1\b\8\0\9\s\h\h\l\w\b\g\q\4\s\1\v\e\e\h\4\f\b\j\6\f\s\5\n\4\q\c\5\g\u\y\x\3\i\0\z\7 ]] 00:07:44.601 12:27:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:44.601 12:27:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:44.601 [2024-11-19 12:27:49.733935] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:44.601 [2024-11-19 12:27:49.734043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73370 ] 00:07:44.860 [2024-11-19 12:27:49.869858] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.860 [2024-11-19 12:27:49.904734] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.860 [2024-11-19 12:27:49.936231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.860  [2024-11-19T12:27:50.120Z] Copying: 512/512 [B] (average 250 kBps) 00:07:44.860 00:07:44.860 12:27:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qeedgavisq3mq4ssfovuau40s3pbtxtvgq2b5vdkwdp1a2tc7s79uca8ppb82lkibq60zaqkzdsoov9dmvy6a9j7yvaybhtqm3m5ia3femwzqef657ddvppsyz1eroxy8bomseq4ctjfiy18uqmr9x5nsgmtwkl68r1ikyyj0z1kxb4iuops7nxsauz2vgcoc0pd7mppbzcp1xoqv7fe02j1txew282b7jjl7c7tjhjmcamlcunce000mmrfrm85elw6kbq3fz6b14jag8dz1jw5irofu7gx6nqdjpscy5axo54zrqi6fxku3lan6x9l6b490gd4m1j6ow2yi3hsponj9xscrikq68krd3ipg8li9br9ojgzhfdaxn6g3vwc3cmm1nxle0ylzudrrh4bsus3fzc1secb2ov0255oa887nbfwo9zhyeynu97eiwuup8nbd4vfyt2xkh8f9zor0d1b809shhlwbgq4s1veeh4fbj6fs5n4qc5guyx3i0z7 == \q\e\e\d\g\a\v\i\s\q\3\m\q\4\s\s\f\o\v\u\a\u\4\0\s\3\p\b\t\x\t\v\g\q\2\b\5\v\d\k\w\d\p\1\a\2\t\c\7\s\7\9\u\c\a\8\p\p\b\8\2\l\k\i\b\q\6\0\z\a\q\k\z\d\s\o\o\v\9\d\m\v\y\6\a\9\j\7\y\v\a\y\b\h\t\q\m\3\m\5\i\a\3\f\e\m\w\z\q\e\f\6\5\7\d\d\v\p\p\s\y\z\1\e\r\o\x\y\8\b\o\m\s\e\q\4\c\t\j\f\i\y\1\8\u\q\m\r\9\x\5\n\s\g\m\t\w\k\l\6\8\r\1\i\k\y\y\j\0\z\1\k\x\b\4\i\u\o\p\s\7\n\x\s\a\u\z\2\v\g\c\o\c\0\p\d\7\m\p\p\b\z\c\p\1\x\o\q\v\7\f\e\0\2\j\1\t\x\e\w\2\8\2\b\7\j\j\l\7\c\7\t\j\h\j\m\c\a\m\l\c\u\n\c\e\0\0\0\m\m\r\f\r\m\8\5\e\l\w\6\k\b\q\3\f\z\6\b\1\4\j\a\g\8\d\z\1\j\w\5\i\r\o\f\u\7\g\x\6\n\q\d\j\p\s\c\y\5\a\x\o\5\4\z\r\q\i\6\f\x\k\u\3\l\a\n\6\x\9\l\6\b\4\9\0\g\d\4\m\1\j\6\o\w\2\y\i\3\h\s\p\o\n\j\9\x\s\c\r\i\k\q\6\8\k\r\d\3\i\p\g\8\l\i\9\b\r\9\o\j\g\z\h\f\d\a\x\n\6\g\3\v\w\c\3\c\m\m\1\n\x\l\e\0\y\l\z\u\d\r\r\h\4\b\s\u\s\3\f\z\c\1\s\e\c\b\2\o\v\0\2\5\5\o\a\8\8\7\n\b\f\w\o\9\z\h\y\e\y\n\u\9\7\e\i\w\u\u\p\8\n\b\d\4\v\f\y\t\2\x\k\h\8\f\9\z\o\r\0\d\1\b\8\0\9\s\h\h\l\w\b\g\q\4\s\1\v\e\e\h\4\f\b\j\6\f\s\5\n\4\q\c\5\g\u\y\x\3\i\0\z\7 ]] 00:07:44.860 12:27:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:44.860 12:27:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:44.860 12:27:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:44.860 12:27:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:44.860 12:27:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:44.860 12:27:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:45.119 [2024-11-19 12:27:50.150932] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
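The write flags cycle through direct, nonblock, sync and dsync for each read flag, so by this point the direct-read pass has covered all four and the loop has restarted with nonblock reads. The last two write flags differ only in durability: in coreutils dd terms, dsync corresponds to O_DSYNC (each write returns once the data, plus the metadata needed to read it back, is stable), while sync corresponds to O_SYNC and additionally flushes the remaining file metadata. A hedged side-by-side with coreutils dd, outside the test itself:

  # Same copy at two durability levels (illustrative only).
  src=$(mktemp); dst=$(mktemp)
  head -c 512 /dev/urandom > "$src"
  dd if="$src" of="$dst" oflag=dsync status=none   # O_DSYNC: data is durable before each write returns
  dd if="$src" of="$dst" oflag=sync  status=none   # O_SYNC: data and remaining metadata are flushed
  cmp -s "$src" "$dst" && echo 'output matches input'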
00:07:45.119 [2024-11-19 12:27:50.151031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73385 ] 00:07:45.119 [2024-11-19 12:27:50.287864] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.119 [2024-11-19 12:27:50.319194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.119 [2024-11-19 12:27:50.346780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.119  [2024-11-19T12:27:50.639Z] Copying: 512/512 [B] (average 500 kBps) 00:07:45.379 00:07:45.379 12:27:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ u8pytkwcc15vsf6mfecay89sb3nexbybsu1b3jms31cyonryu9naneld5r8lc8ibpo960xc9h5cwx2ht585gfol0ndofs4uji4vhi21czsaccfv3zyxd5bqs61glahk4jqg1gb5xcdcp76h3u8tk3g7s7aagnjpg9mhxskb1ebhkhqkxk5nh8s9s0ucna0eppckhl8p29wr4wxlb7vxn4ojydyvzu08clx6axvyiekaba5oo66rwuy3d7kchadtrpe0zvy9gz37rng9dixmke45bljnb46avws7z5zn6girut1jt7de5u7uee8cfar3s2qzld56g368sdd4g04lkj9y3pueqcj7o6qhee4t9cdin7nuv29dtw10f4jhbstvtustgu5aaoxiex4b35b9fesna2c5ap1d5exzumcc8ton2jjwny7zp1ufqy398yz9c5afmx7o9pxk9c9rvu1e3fn6rna0a7sqiylg8v9u1owpusex3g9m0lr8gxdiovxyw == \u\8\p\y\t\k\w\c\c\1\5\v\s\f\6\m\f\e\c\a\y\8\9\s\b\3\n\e\x\b\y\b\s\u\1\b\3\j\m\s\3\1\c\y\o\n\r\y\u\9\n\a\n\e\l\d\5\r\8\l\c\8\i\b\p\o\9\6\0\x\c\9\h\5\c\w\x\2\h\t\5\8\5\g\f\o\l\0\n\d\o\f\s\4\u\j\i\4\v\h\i\2\1\c\z\s\a\c\c\f\v\3\z\y\x\d\5\b\q\s\6\1\g\l\a\h\k\4\j\q\g\1\g\b\5\x\c\d\c\p\7\6\h\3\u\8\t\k\3\g\7\s\7\a\a\g\n\j\p\g\9\m\h\x\s\k\b\1\e\b\h\k\h\q\k\x\k\5\n\h\8\s\9\s\0\u\c\n\a\0\e\p\p\c\k\h\l\8\p\2\9\w\r\4\w\x\l\b\7\v\x\n\4\o\j\y\d\y\v\z\u\0\8\c\l\x\6\a\x\v\y\i\e\k\a\b\a\5\o\o\6\6\r\w\u\y\3\d\7\k\c\h\a\d\t\r\p\e\0\z\v\y\9\g\z\3\7\r\n\g\9\d\i\x\m\k\e\4\5\b\l\j\n\b\4\6\a\v\w\s\7\z\5\z\n\6\g\i\r\u\t\1\j\t\7\d\e\5\u\7\u\e\e\8\c\f\a\r\3\s\2\q\z\l\d\5\6\g\3\6\8\s\d\d\4\g\0\4\l\k\j\9\y\3\p\u\e\q\c\j\7\o\6\q\h\e\e\4\t\9\c\d\i\n\7\n\u\v\2\9\d\t\w\1\0\f\4\j\h\b\s\t\v\t\u\s\t\g\u\5\a\a\o\x\i\e\x\4\b\3\5\b\9\f\e\s\n\a\2\c\5\a\p\1\d\5\e\x\z\u\m\c\c\8\t\o\n\2\j\j\w\n\y\7\z\p\1\u\f\q\y\3\9\8\y\z\9\c\5\a\f\m\x\7\o\9\p\x\k\9\c\9\r\v\u\1\e\3\f\n\6\r\n\a\0\a\7\s\q\i\y\l\g\8\v\9\u\1\o\w\p\u\s\e\x\3\g\9\m\0\l\r\8\g\x\d\i\o\v\x\y\w ]] 00:07:45.379 12:27:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:45.379 12:27:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:45.379 [2024-11-19 12:27:50.542267] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:45.379 [2024-11-19 12:27:50.542378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73389 ] 00:07:45.638 [2024-11-19 12:27:50.681388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.638 [2024-11-19 12:27:50.714195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.638 [2024-11-19 12:27:50.740557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.638  [2024-11-19T12:27:50.898Z] Copying: 512/512 [B] (average 500 kBps) 00:07:45.638 00:07:45.638 12:27:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ u8pytkwcc15vsf6mfecay89sb3nexbybsu1b3jms31cyonryu9naneld5r8lc8ibpo960xc9h5cwx2ht585gfol0ndofs4uji4vhi21czsaccfv3zyxd5bqs61glahk4jqg1gb5xcdcp76h3u8tk3g7s7aagnjpg9mhxskb1ebhkhqkxk5nh8s9s0ucna0eppckhl8p29wr4wxlb7vxn4ojydyvzu08clx6axvyiekaba5oo66rwuy3d7kchadtrpe0zvy9gz37rng9dixmke45bljnb46avws7z5zn6girut1jt7de5u7uee8cfar3s2qzld56g368sdd4g04lkj9y3pueqcj7o6qhee4t9cdin7nuv29dtw10f4jhbstvtustgu5aaoxiex4b35b9fesna2c5ap1d5exzumcc8ton2jjwny7zp1ufqy398yz9c5afmx7o9pxk9c9rvu1e3fn6rna0a7sqiylg8v9u1owpusex3g9m0lr8gxdiovxyw == \u\8\p\y\t\k\w\c\c\1\5\v\s\f\6\m\f\e\c\a\y\8\9\s\b\3\n\e\x\b\y\b\s\u\1\b\3\j\m\s\3\1\c\y\o\n\r\y\u\9\n\a\n\e\l\d\5\r\8\l\c\8\i\b\p\o\9\6\0\x\c\9\h\5\c\w\x\2\h\t\5\8\5\g\f\o\l\0\n\d\o\f\s\4\u\j\i\4\v\h\i\2\1\c\z\s\a\c\c\f\v\3\z\y\x\d\5\b\q\s\6\1\g\l\a\h\k\4\j\q\g\1\g\b\5\x\c\d\c\p\7\6\h\3\u\8\t\k\3\g\7\s\7\a\a\g\n\j\p\g\9\m\h\x\s\k\b\1\e\b\h\k\h\q\k\x\k\5\n\h\8\s\9\s\0\u\c\n\a\0\e\p\p\c\k\h\l\8\p\2\9\w\r\4\w\x\l\b\7\v\x\n\4\o\j\y\d\y\v\z\u\0\8\c\l\x\6\a\x\v\y\i\e\k\a\b\a\5\o\o\6\6\r\w\u\y\3\d\7\k\c\h\a\d\t\r\p\e\0\z\v\y\9\g\z\3\7\r\n\g\9\d\i\x\m\k\e\4\5\b\l\j\n\b\4\6\a\v\w\s\7\z\5\z\n\6\g\i\r\u\t\1\j\t\7\d\e\5\u\7\u\e\e\8\c\f\a\r\3\s\2\q\z\l\d\5\6\g\3\6\8\s\d\d\4\g\0\4\l\k\j\9\y\3\p\u\e\q\c\j\7\o\6\q\h\e\e\4\t\9\c\d\i\n\7\n\u\v\2\9\d\t\w\1\0\f\4\j\h\b\s\t\v\t\u\s\t\g\u\5\a\a\o\x\i\e\x\4\b\3\5\b\9\f\e\s\n\a\2\c\5\a\p\1\d\5\e\x\z\u\m\c\c\8\t\o\n\2\j\j\w\n\y\7\z\p\1\u\f\q\y\3\9\8\y\z\9\c\5\a\f\m\x\7\o\9\p\x\k\9\c\9\r\v\u\1\e\3\f\n\6\r\n\a\0\a\7\s\q\i\y\l\g\8\v\9\u\1\o\w\p\u\s\e\x\3\g\9\m\0\l\r\8\g\x\d\i\o\v\x\y\w ]] 00:07:45.638 12:27:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:45.638 12:27:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:45.898 [2024-11-19 12:27:50.938188] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:45.898 [2024-11-19 12:27:50.938297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73393 ] 00:07:45.898 [2024-11-19 12:27:51.075922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.898 [2024-11-19 12:27:51.108694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.898 [2024-11-19 12:27:51.135056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.898  [2024-11-19T12:27:51.417Z] Copying: 512/512 [B] (average 250 kBps) 00:07:46.157 00:07:46.157 12:27:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ u8pytkwcc15vsf6mfecay89sb3nexbybsu1b3jms31cyonryu9naneld5r8lc8ibpo960xc9h5cwx2ht585gfol0ndofs4uji4vhi21czsaccfv3zyxd5bqs61glahk4jqg1gb5xcdcp76h3u8tk3g7s7aagnjpg9mhxskb1ebhkhqkxk5nh8s9s0ucna0eppckhl8p29wr4wxlb7vxn4ojydyvzu08clx6axvyiekaba5oo66rwuy3d7kchadtrpe0zvy9gz37rng9dixmke45bljnb46avws7z5zn6girut1jt7de5u7uee8cfar3s2qzld56g368sdd4g04lkj9y3pueqcj7o6qhee4t9cdin7nuv29dtw10f4jhbstvtustgu5aaoxiex4b35b9fesna2c5ap1d5exzumcc8ton2jjwny7zp1ufqy398yz9c5afmx7o9pxk9c9rvu1e3fn6rna0a7sqiylg8v9u1owpusex3g9m0lr8gxdiovxyw == \u\8\p\y\t\k\w\c\c\1\5\v\s\f\6\m\f\e\c\a\y\8\9\s\b\3\n\e\x\b\y\b\s\u\1\b\3\j\m\s\3\1\c\y\o\n\r\y\u\9\n\a\n\e\l\d\5\r\8\l\c\8\i\b\p\o\9\6\0\x\c\9\h\5\c\w\x\2\h\t\5\8\5\g\f\o\l\0\n\d\o\f\s\4\u\j\i\4\v\h\i\2\1\c\z\s\a\c\c\f\v\3\z\y\x\d\5\b\q\s\6\1\g\l\a\h\k\4\j\q\g\1\g\b\5\x\c\d\c\p\7\6\h\3\u\8\t\k\3\g\7\s\7\a\a\g\n\j\p\g\9\m\h\x\s\k\b\1\e\b\h\k\h\q\k\x\k\5\n\h\8\s\9\s\0\u\c\n\a\0\e\p\p\c\k\h\l\8\p\2\9\w\r\4\w\x\l\b\7\v\x\n\4\o\j\y\d\y\v\z\u\0\8\c\l\x\6\a\x\v\y\i\e\k\a\b\a\5\o\o\6\6\r\w\u\y\3\d\7\k\c\h\a\d\t\r\p\e\0\z\v\y\9\g\z\3\7\r\n\g\9\d\i\x\m\k\e\4\5\b\l\j\n\b\4\6\a\v\w\s\7\z\5\z\n\6\g\i\r\u\t\1\j\t\7\d\e\5\u\7\u\e\e\8\c\f\a\r\3\s\2\q\z\l\d\5\6\g\3\6\8\s\d\d\4\g\0\4\l\k\j\9\y\3\p\u\e\q\c\j\7\o\6\q\h\e\e\4\t\9\c\d\i\n\7\n\u\v\2\9\d\t\w\1\0\f\4\j\h\b\s\t\v\t\u\s\t\g\u\5\a\a\o\x\i\e\x\4\b\3\5\b\9\f\e\s\n\a\2\c\5\a\p\1\d\5\e\x\z\u\m\c\c\8\t\o\n\2\j\j\w\n\y\7\z\p\1\u\f\q\y\3\9\8\y\z\9\c\5\a\f\m\x\7\o\9\p\x\k\9\c\9\r\v\u\1\e\3\f\n\6\r\n\a\0\a\7\s\q\i\y\l\g\8\v\9\u\1\o\w\p\u\s\e\x\3\g\9\m\0\l\r\8\g\x\d\i\o\v\x\y\w ]] 00:07:46.157 12:27:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:46.157 12:27:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:46.157 [2024-11-19 12:27:51.340802] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:46.157 [2024-11-19 12:27:51.340908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73408 ] 00:07:46.417 [2024-11-19 12:27:51.479457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.417 [2024-11-19 12:27:51.512566] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.417 [2024-11-19 12:27:51.543898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.417  [2024-11-19T12:27:51.677Z] Copying: 512/512 [B] (average 250 kBps) 00:07:46.417 00:07:46.676 12:27:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ u8pytkwcc15vsf6mfecay89sb3nexbybsu1b3jms31cyonryu9naneld5r8lc8ibpo960xc9h5cwx2ht585gfol0ndofs4uji4vhi21czsaccfv3zyxd5bqs61glahk4jqg1gb5xcdcp76h3u8tk3g7s7aagnjpg9mhxskb1ebhkhqkxk5nh8s9s0ucna0eppckhl8p29wr4wxlb7vxn4ojydyvzu08clx6axvyiekaba5oo66rwuy3d7kchadtrpe0zvy9gz37rng9dixmke45bljnb46avws7z5zn6girut1jt7de5u7uee8cfar3s2qzld56g368sdd4g04lkj9y3pueqcj7o6qhee4t9cdin7nuv29dtw10f4jhbstvtustgu5aaoxiex4b35b9fesna2c5ap1d5exzumcc8ton2jjwny7zp1ufqy398yz9c5afmx7o9pxk9c9rvu1e3fn6rna0a7sqiylg8v9u1owpusex3g9m0lr8gxdiovxyw == \u\8\p\y\t\k\w\c\c\1\5\v\s\f\6\m\f\e\c\a\y\8\9\s\b\3\n\e\x\b\y\b\s\u\1\b\3\j\m\s\3\1\c\y\o\n\r\y\u\9\n\a\n\e\l\d\5\r\8\l\c\8\i\b\p\o\9\6\0\x\c\9\h\5\c\w\x\2\h\t\5\8\5\g\f\o\l\0\n\d\o\f\s\4\u\j\i\4\v\h\i\2\1\c\z\s\a\c\c\f\v\3\z\y\x\d\5\b\q\s\6\1\g\l\a\h\k\4\j\q\g\1\g\b\5\x\c\d\c\p\7\6\h\3\u\8\t\k\3\g\7\s\7\a\a\g\n\j\p\g\9\m\h\x\s\k\b\1\e\b\h\k\h\q\k\x\k\5\n\h\8\s\9\s\0\u\c\n\a\0\e\p\p\c\k\h\l\8\p\2\9\w\r\4\w\x\l\b\7\v\x\n\4\o\j\y\d\y\v\z\u\0\8\c\l\x\6\a\x\v\y\i\e\k\a\b\a\5\o\o\6\6\r\w\u\y\3\d\7\k\c\h\a\d\t\r\p\e\0\z\v\y\9\g\z\3\7\r\n\g\9\d\i\x\m\k\e\4\5\b\l\j\n\b\4\6\a\v\w\s\7\z\5\z\n\6\g\i\r\u\t\1\j\t\7\d\e\5\u\7\u\e\e\8\c\f\a\r\3\s\2\q\z\l\d\5\6\g\3\6\8\s\d\d\4\g\0\4\l\k\j\9\y\3\p\u\e\q\c\j\7\o\6\q\h\e\e\4\t\9\c\d\i\n\7\n\u\v\2\9\d\t\w\1\0\f\4\j\h\b\s\t\v\t\u\s\t\g\u\5\a\a\o\x\i\e\x\4\b\3\5\b\9\f\e\s\n\a\2\c\5\a\p\1\d\5\e\x\z\u\m\c\c\8\t\o\n\2\j\j\w\n\y\7\z\p\1\u\f\q\y\3\9\8\y\z\9\c\5\a\f\m\x\7\o\9\p\x\k\9\c\9\r\v\u\1\e\3\f\n\6\r\n\a\0\a\7\s\q\i\y\l\g\8\v\9\u\1\o\w\p\u\s\e\x\3\g\9\m\0\l\r\8\g\x\d\i\o\v\x\y\w ]] 00:07:46.676 00:07:46.676 real 0m3.158s 00:07:46.676 user 0m1.570s 00:07:46.676 sys 0m1.311s 00:07:46.676 12:27:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.676 ************************************ 00:07:46.676 END TEST dd_flags_misc 00:07:46.676 12:27:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:46.676 ************************************ 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:46.677 * Second test run, disabling liburing, forcing AIO 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:46.677 ************************************ 00:07:46.677 START TEST dd_flag_append_forced_aio 00:07:46.677 ************************************ 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=4ap9kltlzox1o4n23d7usi4uqpkxc2xe 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=ir62jncx3tfj5zuoaiv78iz2bxwpxgzz 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 4ap9kltlzox1o4n23d7usi4uqpkxc2xe 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s ir62jncx3tfj5zuoaiv78iz2bxwpxgzz 00:07:46.677 12:27:51 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:46.677 [2024-11-19 12:27:51.789513] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
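This run seeds dump0 and dump1 with two different 32-character strings and then copies dump0 onto dump1 with --oflag=append, so the expected result, checked just below, is dump1's original string immediately followed by dump0's. With coreutils dd the same append semantics look like this (illustrative; the strings are placeholders, not the test's random values):

  # oflag=append: the source is written after the destination's existing bytes.
  cd "$(mktemp -d)"
  printf %s 'AAAA' > dump0
  printf %s 'BBBB' > dump1
  dd if=dump0 of=dump1 oflag=append conv=notrunc status=none
  [[ $(cat dump1) == 'BBBBAAAA' ]] && echo 'appended as expected'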
00:07:46.677 [2024-11-19 12:27:51.789617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73431 ] 00:07:46.677 [2024-11-19 12:27:51.913233] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.936 [2024-11-19 12:27:51.946746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.936 [2024-11-19 12:27:51.975630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.936  [2024-11-19T12:27:52.196Z] Copying: 32/32 [B] (average 31 kBps) 00:07:46.936 00:07:46.936 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ ir62jncx3tfj5zuoaiv78iz2bxwpxgzz4ap9kltlzox1o4n23d7usi4uqpkxc2xe == \i\r\6\2\j\n\c\x\3\t\f\j\5\z\u\o\a\i\v\7\8\i\z\2\b\x\w\p\x\g\z\z\4\a\p\9\k\l\t\l\z\o\x\1\o\4\n\2\3\d\7\u\s\i\4\u\q\p\k\x\c\2\x\e ]] 00:07:46.936 00:07:46.936 real 0m0.411s 00:07:46.936 user 0m0.192s 00:07:46.936 sys 0m0.097s 00:07:46.936 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.936 ************************************ 00:07:46.936 END TEST dd_flag_append_forced_aio 00:07:46.936 ************************************ 00:07:46.936 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:46.936 12:27:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:46.936 12:27:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.936 12:27:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.936 12:27:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:47.209 ************************************ 00:07:47.209 START TEST dd_flag_directory_forced_aio 00:07:47.209 ************************************ 00:07:47.209 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:07:47.209 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:47.209 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:47.209 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:47.209 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.209 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.209 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.209 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.209 12:27:52 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.209 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.209 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.209 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.209 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:47.209 [2024-11-19 12:27:52.254742] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:47.209 [2024-11-19 12:27:52.254885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73463 ] 00:07:47.209 [2024-11-19 12:27:52.394707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.209 [2024-11-19 12:27:52.428275] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.209 [2024-11-19 12:27:52.455372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.496 [2024-11-19 12:27:52.470959] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:47.496 [2024-11-19 12:27:52.471057] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:47.496 [2024-11-19 12:27:52.471070] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.496 [2024-11-19 12:27:52.529708] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.496 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:47.496 [2024-11-19 12:27:52.643196] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:47.496 [2024-11-19 12:27:52.643293] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73467 ] 00:07:47.756 [2024-11-19 12:27:52.771268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.756 [2024-11-19 12:27:52.804804] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.756 [2024-11-19 12:27:52.830927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.756 [2024-11-19 12:27:52.845209] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:47.756 [2024-11-19 12:27:52.845271] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:47.756 [2024-11-19 12:27:52.845299] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.756 [2024-11-19 12:27:52.899795] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:47.756 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:47.756 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.756 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:47.756 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:47.756 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:47.756 12:27:52 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.756 00:07:47.756 real 0m0.768s 00:07:47.756 user 0m0.377s 00:07:47.756 sys 0m0.183s 00:07:47.756 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.756 12:27:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:47.756 ************************************ 00:07:47.756 END TEST dd_flag_directory_forced_aio 00:07:47.756 ************************************ 00:07:47.756 12:27:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:47.756 12:27:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.756 12:27:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.756 12:27:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:47.756 ************************************ 00:07:47.756 START TEST dd_flag_nofollow_forced_aio 00:07:47.756 ************************************ 00:07:47.756 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:07:47.756 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:47.756 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:47.756 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:48.015 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:48.015 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.015 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:48.015 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.015 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.015 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.015 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.015 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.015 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.015 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.015 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.015 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.016 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.016 [2024-11-19 12:27:53.076463] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:48.016 [2024-11-19 12:27:53.076567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73496 ] 00:07:48.016 [2024-11-19 12:27:53.215187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.016 [2024-11-19 12:27:53.245892] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.016 [2024-11-19 12:27:53.272098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.276 [2024-11-19 12:27:53.287091] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:48.276 [2024-11-19 12:27:53.287136] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:48.276 [2024-11-19 12:27:53.287165] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.276 [2024-11-19 12:27:53.341933] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.276 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:48.276 [2024-11-19 12:27:53.465990] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:48.276 [2024-11-19 12:27:53.466093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73505 ] 00:07:48.536 [2024-11-19 12:27:53.603124] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.536 [2024-11-19 12:27:53.634146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.536 [2024-11-19 12:27:53.660302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.536 [2024-11-19 12:27:53.674569] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:48.536 [2024-11-19 12:27:53.674621] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:48.536 [2024-11-19 12:27:53.674651] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.536 [2024-11-19 12:27:53.729826] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:48.796 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:48.796 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.796 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:48.796 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:48.796 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:48.796 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.796 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:07:48.796 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:48.796 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:48.796 12:27:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.796 [2024-11-19 12:27:53.848231] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:48.796 [2024-11-19 12:27:53.848464] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73507 ] 00:07:48.796 [2024-11-19 12:27:53.971887] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.796 [2024-11-19 12:27:54.008818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.796 [2024-11-19 12:27:54.036910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.796  [2024-11-19T12:27:54.316Z] Copying: 512/512 [B] (average 500 kBps) 00:07:49.056 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 3av0z1slhxkxutbqr0lwhb7itzs0vo8cb2geqq6r3bo0g6zknalwvoktnw0ot9iyd4bdpft5qju8kbmfvrjdfu3qq9cuyrrq68i2v97352ek559sqn4lf66087lbrwlnuun52tmr6c1uh83vg2oed2unmc0gfqm2ardccxovnx5x0l33iw281f6j7k2y68525gzq2t9oxa1wibzt81nkh0t0jkpk4x43v9sutvexpp5kz3xiy28uz7fv2ybnq59cc05skqei8jw8ylnlvt2m6vx7ut60du8mvp8mdyf5xc9a8o4vy2xzmik24genkt65b5nxhfokdrxcn4jfb0d22ij394qw1n40fgal21wq9of2maqgdoclpk0y3j1zh5b3fs0b06wfcrcylsnux8q30m4u3gdoeq4lyh4bt5jodg3bcn1zy4yntiu6jcfgfuvgtri67t6fwiocel0x6nhp496qify8937by810fkd3gnoe8w7ixnav98ot905xst59 == \3\a\v\0\z\1\s\l\h\x\k\x\u\t\b\q\r\0\l\w\h\b\7\i\t\z\s\0\v\o\8\c\b\2\g\e\q\q\6\r\3\b\o\0\g\6\z\k\n\a\l\w\v\o\k\t\n\w\0\o\t\9\i\y\d\4\b\d\p\f\t\5\q\j\u\8\k\b\m\f\v\r\j\d\f\u\3\q\q\9\c\u\y\r\r\q\6\8\i\2\v\9\7\3\5\2\e\k\5\5\9\s\q\n\4\l\f\6\6\0\8\7\l\b\r\w\l\n\u\u\n\5\2\t\m\r\6\c\1\u\h\8\3\v\g\2\o\e\d\2\u\n\m\c\0\g\f\q\m\2\a\r\d\c\c\x\o\v\n\x\5\x\0\l\3\3\i\w\2\8\1\f\6\j\7\k\2\y\6\8\5\2\5\g\z\q\2\t\9\o\x\a\1\w\i\b\z\t\8\1\n\k\h\0\t\0\j\k\p\k\4\x\4\3\v\9\s\u\t\v\e\x\p\p\5\k\z\3\x\i\y\2\8\u\z\7\f\v\2\y\b\n\q\5\9\c\c\0\5\s\k\q\e\i\8\j\w\8\y\l\n\l\v\t\2\m\6\v\x\7\u\t\6\0\d\u\8\m\v\p\8\m\d\y\f\5\x\c\9\a\8\o\4\v\y\2\x\z\m\i\k\2\4\g\e\n\k\t\6\5\b\5\n\x\h\f\o\k\d\r\x\c\n\4\j\f\b\0\d\2\2\i\j\3\9\4\q\w\1\n\4\0\f\g\a\l\2\1\w\q\9\o\f\2\m\a\q\g\d\o\c\l\p\k\0\y\3\j\1\z\h\5\b\3\f\s\0\b\0\6\w\f\c\r\c\y\l\s\n\u\x\8\q\3\0\m\4\u\3\g\d\o\e\q\4\l\y\h\4\b\t\5\j\o\d\g\3\b\c\n\1\z\y\4\y\n\t\i\u\6\j\c\f\g\f\u\v\g\t\r\i\6\7\t\6\f\w\i\o\c\e\l\0\x\6\n\h\p\4\9\6\q\i\f\y\8\9\3\7\b\y\8\1\0\f\k\d\3\g\n\o\e\8\w\7\i\x\n\a\v\9\8\o\t\9\0\5\x\s\t\5\9 ]] 00:07:49.056 00:07:49.056 real 0m1.219s 00:07:49.056 user 0m0.585s 00:07:49.056 sys 0m0.292s 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.056 ************************************ 00:07:49.056 END TEST dd_flag_nofollow_forced_aio 00:07:49.056 ************************************ 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix -- 
dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:49.056 ************************************ 00:07:49.056 START TEST dd_flag_noatime_forced_aio 00:07:49.056 ************************************ 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732019274 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732019274 00:07:49.056 12:27:54 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:50.435 12:27:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.435 [2024-11-19 12:27:55.360552] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:50.435 [2024-11-19 12:27:55.360645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73553 ] 00:07:50.435 [2024-11-19 12:27:55.500848] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.435 [2024-11-19 12:27:55.542003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.435 [2024-11-19 12:27:55.574296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.435  [2024-11-19T12:27:55.954Z] Copying: 512/512 [B] (average 500 kBps) 00:07:50.694 00:07:50.694 12:27:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.694 12:27:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732019274 )) 00:07:50.694 12:27:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.694 12:27:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732019274 )) 00:07:50.694 12:27:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.694 [2024-11-19 12:27:55.849415] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:50.694 [2024-11-19 12:27:55.849550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73559 ] 00:07:50.954 [2024-11-19 12:27:55.998169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.954 [2024-11-19 12:27:56.039003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.954 [2024-11-19 12:27:56.072093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.954  [2024-11-19T12:27:56.473Z] Copying: 512/512 [B] (average 500 kBps) 00:07:51.213 00:07:51.213 12:27:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:51.213 12:27:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732019276 )) 00:07:51.213 00:07:51.213 real 0m1.957s 00:07:51.213 user 0m0.483s 00:07:51.213 sys 0m0.231s 00:07:51.213 ************************************ 00:07:51.213 12:27:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.213 12:27:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:51.213 END TEST dd_flag_noatime_forced_aio 00:07:51.213 ************************************ 00:07:51.213 12:27:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:51.213 12:27:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:51.214 12:27:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.214 12:27:56 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.214 ************************************ 00:07:51.214 START TEST dd_flags_misc_forced_aio 00:07:51.214 ************************************ 00:07:51.214 12:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:07:51.214 12:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:51.214 12:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:51.214 12:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:51.214 12:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:51.214 12:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:51.214 12:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:51.214 12:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:51.214 12:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:51.214 12:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:51.214 [2024-11-19 12:27:56.355908] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:51.214 [2024-11-19 12:27:56.356002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73591 ] 00:07:51.473 [2024-11-19 12:27:56.493725] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.474 [2024-11-19 12:27:56.526831] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.474 [2024-11-19 12:27:56.554050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.474  [2024-11-19T12:27:56.734Z] Copying: 512/512 [B] (average 500 kBps) 00:07:51.474 00:07:51.474 12:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hl0p4b9yvl49s0y3zrywkq1yan999jz6f6vrqhrhqbvtsji5hhr2ql5p8hw3ikv616z8wpjtpzoup7nwmu9wq22ryhh0zgwwpd9ktzxxubolr4yw9qhnmw0iqfmebie660kkrdcff34hamgyqlkvraz7girxptmtehwam63s80igqvk7htzpia2mjjwtoh7zq9js9tuevyio09wkdsuls1icyiok1c1b6l41hohc1x99zcx08vx9n9hsms713theg6andpq3iribbi51i7jscd9giy1f9qbojqeh8focq78kehupnfw7gwzuxsum4xwc01tjtmz37rx6dr3s4bky78a3ifk9d5tupy42ffkrei190fg3t1r5ju1lt59wbz1e0ok3argvhywm6lw6jw8e4me2il8u3m0oyy34otyxel41p4bel6xmfcx58wrg8zapv93uqsvdwtmmzxuihy9ajmzgiki3gdsngwplbumpkldmzfaimm15p2465quz8w7i == 
\h\l\0\p\4\b\9\y\v\l\4\9\s\0\y\3\z\r\y\w\k\q\1\y\a\n\9\9\9\j\z\6\f\6\v\r\q\h\r\h\q\b\v\t\s\j\i\5\h\h\r\2\q\l\5\p\8\h\w\3\i\k\v\6\1\6\z\8\w\p\j\t\p\z\o\u\p\7\n\w\m\u\9\w\q\2\2\r\y\h\h\0\z\g\w\w\p\d\9\k\t\z\x\x\u\b\o\l\r\4\y\w\9\q\h\n\m\w\0\i\q\f\m\e\b\i\e\6\6\0\k\k\r\d\c\f\f\3\4\h\a\m\g\y\q\l\k\v\r\a\z\7\g\i\r\x\p\t\m\t\e\h\w\a\m\6\3\s\8\0\i\g\q\v\k\7\h\t\z\p\i\a\2\m\j\j\w\t\o\h\7\z\q\9\j\s\9\t\u\e\v\y\i\o\0\9\w\k\d\s\u\l\s\1\i\c\y\i\o\k\1\c\1\b\6\l\4\1\h\o\h\c\1\x\9\9\z\c\x\0\8\v\x\9\n\9\h\s\m\s\7\1\3\t\h\e\g\6\a\n\d\p\q\3\i\r\i\b\b\i\5\1\i\7\j\s\c\d\9\g\i\y\1\f\9\q\b\o\j\q\e\h\8\f\o\c\q\7\8\k\e\h\u\p\n\f\w\7\g\w\z\u\x\s\u\m\4\x\w\c\0\1\t\j\t\m\z\3\7\r\x\6\d\r\3\s\4\b\k\y\7\8\a\3\i\f\k\9\d\5\t\u\p\y\4\2\f\f\k\r\e\i\1\9\0\f\g\3\t\1\r\5\j\u\1\l\t\5\9\w\b\z\1\e\0\o\k\3\a\r\g\v\h\y\w\m\6\l\w\6\j\w\8\e\4\m\e\2\i\l\8\u\3\m\0\o\y\y\3\4\o\t\y\x\e\l\4\1\p\4\b\e\l\6\x\m\f\c\x\5\8\w\r\g\8\z\a\p\v\9\3\u\q\s\v\d\w\t\m\m\z\x\u\i\h\y\9\a\j\m\z\g\i\k\i\3\g\d\s\n\g\w\p\l\b\u\m\p\k\l\d\m\z\f\a\i\m\m\1\5\p\2\4\6\5\q\u\z\8\w\7\i ]] 00:07:51.474 12:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:51.474 12:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:51.733 [2024-11-19 12:27:56.777483] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:51.733 [2024-11-19 12:27:56.777771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73593 ] 00:07:51.733 [2024-11-19 12:27:56.914123] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.733 [2024-11-19 12:27:56.944939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.733 [2024-11-19 12:27:56.970873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.733  [2024-11-19T12:27:57.253Z] Copying: 512/512 [B] (average 500 kBps) 00:07:51.993 00:07:51.993 12:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hl0p4b9yvl49s0y3zrywkq1yan999jz6f6vrqhrhqbvtsji5hhr2ql5p8hw3ikv616z8wpjtpzoup7nwmu9wq22ryhh0zgwwpd9ktzxxubolr4yw9qhnmw0iqfmebie660kkrdcff34hamgyqlkvraz7girxptmtehwam63s80igqvk7htzpia2mjjwtoh7zq9js9tuevyio09wkdsuls1icyiok1c1b6l41hohc1x99zcx08vx9n9hsms713theg6andpq3iribbi51i7jscd9giy1f9qbojqeh8focq78kehupnfw7gwzuxsum4xwc01tjtmz37rx6dr3s4bky78a3ifk9d5tupy42ffkrei190fg3t1r5ju1lt59wbz1e0ok3argvhywm6lw6jw8e4me2il8u3m0oyy34otyxel41p4bel6xmfcx58wrg8zapv93uqsvdwtmmzxuihy9ajmzgiki3gdsngwplbumpkldmzfaimm15p2465quz8w7i == 
\h\l\0\p\4\b\9\y\v\l\4\9\s\0\y\3\z\r\y\w\k\q\1\y\a\n\9\9\9\j\z\6\f\6\v\r\q\h\r\h\q\b\v\t\s\j\i\5\h\h\r\2\q\l\5\p\8\h\w\3\i\k\v\6\1\6\z\8\w\p\j\t\p\z\o\u\p\7\n\w\m\u\9\w\q\2\2\r\y\h\h\0\z\g\w\w\p\d\9\k\t\z\x\x\u\b\o\l\r\4\y\w\9\q\h\n\m\w\0\i\q\f\m\e\b\i\e\6\6\0\k\k\r\d\c\f\f\3\4\h\a\m\g\y\q\l\k\v\r\a\z\7\g\i\r\x\p\t\m\t\e\h\w\a\m\6\3\s\8\0\i\g\q\v\k\7\h\t\z\p\i\a\2\m\j\j\w\t\o\h\7\z\q\9\j\s\9\t\u\e\v\y\i\o\0\9\w\k\d\s\u\l\s\1\i\c\y\i\o\k\1\c\1\b\6\l\4\1\h\o\h\c\1\x\9\9\z\c\x\0\8\v\x\9\n\9\h\s\m\s\7\1\3\t\h\e\g\6\a\n\d\p\q\3\i\r\i\b\b\i\5\1\i\7\j\s\c\d\9\g\i\y\1\f\9\q\b\o\j\q\e\h\8\f\o\c\q\7\8\k\e\h\u\p\n\f\w\7\g\w\z\u\x\s\u\m\4\x\w\c\0\1\t\j\t\m\z\3\7\r\x\6\d\r\3\s\4\b\k\y\7\8\a\3\i\f\k\9\d\5\t\u\p\y\4\2\f\f\k\r\e\i\1\9\0\f\g\3\t\1\r\5\j\u\1\l\t\5\9\w\b\z\1\e\0\o\k\3\a\r\g\v\h\y\w\m\6\l\w\6\j\w\8\e\4\m\e\2\i\l\8\u\3\m\0\o\y\y\3\4\o\t\y\x\e\l\4\1\p\4\b\e\l\6\x\m\f\c\x\5\8\w\r\g\8\z\a\p\v\9\3\u\q\s\v\d\w\t\m\m\z\x\u\i\h\y\9\a\j\m\z\g\i\k\i\3\g\d\s\n\g\w\p\l\b\u\m\p\k\l\d\m\z\f\a\i\m\m\1\5\p\2\4\6\5\q\u\z\8\w\7\i ]] 00:07:51.993 12:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:51.993 12:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:51.993 [2024-11-19 12:27:57.199340] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:51.993 [2024-11-19 12:27:57.199435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73601 ] 00:07:52.252 [2024-11-19 12:27:57.340699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.252 [2024-11-19 12:27:57.373440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.252 [2024-11-19 12:27:57.402004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.252  [2024-11-19T12:27:57.771Z] Copying: 512/512 [B] (average 250 kBps) 00:07:52.511 00:07:52.512 12:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hl0p4b9yvl49s0y3zrywkq1yan999jz6f6vrqhrhqbvtsji5hhr2ql5p8hw3ikv616z8wpjtpzoup7nwmu9wq22ryhh0zgwwpd9ktzxxubolr4yw9qhnmw0iqfmebie660kkrdcff34hamgyqlkvraz7girxptmtehwam63s80igqvk7htzpia2mjjwtoh7zq9js9tuevyio09wkdsuls1icyiok1c1b6l41hohc1x99zcx08vx9n9hsms713theg6andpq3iribbi51i7jscd9giy1f9qbojqeh8focq78kehupnfw7gwzuxsum4xwc01tjtmz37rx6dr3s4bky78a3ifk9d5tupy42ffkrei190fg3t1r5ju1lt59wbz1e0ok3argvhywm6lw6jw8e4me2il8u3m0oyy34otyxel41p4bel6xmfcx58wrg8zapv93uqsvdwtmmzxuihy9ajmzgiki3gdsngwplbumpkldmzfaimm15p2465quz8w7i == 
\h\l\0\p\4\b\9\y\v\l\4\9\s\0\y\3\z\r\y\w\k\q\1\y\a\n\9\9\9\j\z\6\f\6\v\r\q\h\r\h\q\b\v\t\s\j\i\5\h\h\r\2\q\l\5\p\8\h\w\3\i\k\v\6\1\6\z\8\w\p\j\t\p\z\o\u\p\7\n\w\m\u\9\w\q\2\2\r\y\h\h\0\z\g\w\w\p\d\9\k\t\z\x\x\u\b\o\l\r\4\y\w\9\q\h\n\m\w\0\i\q\f\m\e\b\i\e\6\6\0\k\k\r\d\c\f\f\3\4\h\a\m\g\y\q\l\k\v\r\a\z\7\g\i\r\x\p\t\m\t\e\h\w\a\m\6\3\s\8\0\i\g\q\v\k\7\h\t\z\p\i\a\2\m\j\j\w\t\o\h\7\z\q\9\j\s\9\t\u\e\v\y\i\o\0\9\w\k\d\s\u\l\s\1\i\c\y\i\o\k\1\c\1\b\6\l\4\1\h\o\h\c\1\x\9\9\z\c\x\0\8\v\x\9\n\9\h\s\m\s\7\1\3\t\h\e\g\6\a\n\d\p\q\3\i\r\i\b\b\i\5\1\i\7\j\s\c\d\9\g\i\y\1\f\9\q\b\o\j\q\e\h\8\f\o\c\q\7\8\k\e\h\u\p\n\f\w\7\g\w\z\u\x\s\u\m\4\x\w\c\0\1\t\j\t\m\z\3\7\r\x\6\d\r\3\s\4\b\k\y\7\8\a\3\i\f\k\9\d\5\t\u\p\y\4\2\f\f\k\r\e\i\1\9\0\f\g\3\t\1\r\5\j\u\1\l\t\5\9\w\b\z\1\e\0\o\k\3\a\r\g\v\h\y\w\m\6\l\w\6\j\w\8\e\4\m\e\2\i\l\8\u\3\m\0\o\y\y\3\4\o\t\y\x\e\l\4\1\p\4\b\e\l\6\x\m\f\c\x\5\8\w\r\g\8\z\a\p\v\9\3\u\q\s\v\d\w\t\m\m\z\x\u\i\h\y\9\a\j\m\z\g\i\k\i\3\g\d\s\n\g\w\p\l\b\u\m\p\k\l\d\m\z\f\a\i\m\m\1\5\p\2\4\6\5\q\u\z\8\w\7\i ]] 00:07:52.512 12:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:52.512 12:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:52.512 [2024-11-19 12:27:57.627090] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:52.512 [2024-11-19 12:27:57.627187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73608 ] 00:07:52.512 [2024-11-19 12:27:57.767127] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.771 [2024-11-19 12:27:57.801076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.771 [2024-11-19 12:27:57.829157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.771  [2024-11-19T12:27:58.031Z] Copying: 512/512 [B] (average 250 kBps) 00:07:52.771 00:07:52.771 12:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hl0p4b9yvl49s0y3zrywkq1yan999jz6f6vrqhrhqbvtsji5hhr2ql5p8hw3ikv616z8wpjtpzoup7nwmu9wq22ryhh0zgwwpd9ktzxxubolr4yw9qhnmw0iqfmebie660kkrdcff34hamgyqlkvraz7girxptmtehwam63s80igqvk7htzpia2mjjwtoh7zq9js9tuevyio09wkdsuls1icyiok1c1b6l41hohc1x99zcx08vx9n9hsms713theg6andpq3iribbi51i7jscd9giy1f9qbojqeh8focq78kehupnfw7gwzuxsum4xwc01tjtmz37rx6dr3s4bky78a3ifk9d5tupy42ffkrei190fg3t1r5ju1lt59wbz1e0ok3argvhywm6lw6jw8e4me2il8u3m0oyy34otyxel41p4bel6xmfcx58wrg8zapv93uqsvdwtmmzxuihy9ajmzgiki3gdsngwplbumpkldmzfaimm15p2465quz8w7i == 
\h\l\0\p\4\b\9\y\v\l\4\9\s\0\y\3\z\r\y\w\k\q\1\y\a\n\9\9\9\j\z\6\f\6\v\r\q\h\r\h\q\b\v\t\s\j\i\5\h\h\r\2\q\l\5\p\8\h\w\3\i\k\v\6\1\6\z\8\w\p\j\t\p\z\o\u\p\7\n\w\m\u\9\w\q\2\2\r\y\h\h\0\z\g\w\w\p\d\9\k\t\z\x\x\u\b\o\l\r\4\y\w\9\q\h\n\m\w\0\i\q\f\m\e\b\i\e\6\6\0\k\k\r\d\c\f\f\3\4\h\a\m\g\y\q\l\k\v\r\a\z\7\g\i\r\x\p\t\m\t\e\h\w\a\m\6\3\s\8\0\i\g\q\v\k\7\h\t\z\p\i\a\2\m\j\j\w\t\o\h\7\z\q\9\j\s\9\t\u\e\v\y\i\o\0\9\w\k\d\s\u\l\s\1\i\c\y\i\o\k\1\c\1\b\6\l\4\1\h\o\h\c\1\x\9\9\z\c\x\0\8\v\x\9\n\9\h\s\m\s\7\1\3\t\h\e\g\6\a\n\d\p\q\3\i\r\i\b\b\i\5\1\i\7\j\s\c\d\9\g\i\y\1\f\9\q\b\o\j\q\e\h\8\f\o\c\q\7\8\k\e\h\u\p\n\f\w\7\g\w\z\u\x\s\u\m\4\x\w\c\0\1\t\j\t\m\z\3\7\r\x\6\d\r\3\s\4\b\k\y\7\8\a\3\i\f\k\9\d\5\t\u\p\y\4\2\f\f\k\r\e\i\1\9\0\f\g\3\t\1\r\5\j\u\1\l\t\5\9\w\b\z\1\e\0\o\k\3\a\r\g\v\h\y\w\m\6\l\w\6\j\w\8\e\4\m\e\2\i\l\8\u\3\m\0\o\y\y\3\4\o\t\y\x\e\l\4\1\p\4\b\e\l\6\x\m\f\c\x\5\8\w\r\g\8\z\a\p\v\9\3\u\q\s\v\d\w\t\m\m\z\x\u\i\h\y\9\a\j\m\z\g\i\k\i\3\g\d\s\n\g\w\p\l\b\u\m\p\k\l\d\m\z\f\a\i\m\m\1\5\p\2\4\6\5\q\u\z\8\w\7\i ]] 00:07:52.771 12:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:52.771 12:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:52.771 12:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:52.771 12:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:52.771 12:27:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:52.771 12:27:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:53.031 [2024-11-19 12:27:58.079231] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:53.031 [2024-11-19 12:27:58.079500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73610 ] 00:07:53.031 [2024-11-19 12:27:58.219819] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.031 [2024-11-19 12:27:58.253659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.031 [2024-11-19 12:27:58.284054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.290  [2024-11-19T12:27:58.550Z] Copying: 512/512 [B] (average 500 kBps) 00:07:53.290 00:07:53.290 12:27:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ stkfsjp7rbrp0h2fv7kwnoqen0eqfr09jbpg2yikhc6xelp1iz9s5hwqs0yqb84xe8dy1sir85y442zshp3oa79rpuios3x0fw50zookle4mcg9h6efwde3daoytplgae3p03ettaaurjtsqnmyl5ve22v8atrog1d8gg10e8jc41rrp2y3oennk3sb8tb5qrnmueojlrvazkf1ek6evtebfazc5isr2eiey58xev8lv9lq6l69gp2gbmyzz4246xfb6lja9q2figwipjvq5485p5llgbbo3vm6y3re4df31yee71lt4ov5gtaaz8pjj2uedy1tqm82xyawy80li3y93fnj216iteokzfcfi5cty1p22favhc5dhy17frsl37izf58uqzrbljikr840rr88ifjit5y4h8cxakbdb4v2426qtc5fgr5bn2u69zjlme4udfuaoaumbfajp9dq5emajnyspcksx193nhu9ls6mkg672briyugho2zoizr8l == \s\t\k\f\s\j\p\7\r\b\r\p\0\h\2\f\v\7\k\w\n\o\q\e\n\0\e\q\f\r\0\9\j\b\p\g\2\y\i\k\h\c\6\x\e\l\p\1\i\z\9\s\5\h\w\q\s\0\y\q\b\8\4\x\e\8\d\y\1\s\i\r\8\5\y\4\4\2\z\s\h\p\3\o\a\7\9\r\p\u\i\o\s\3\x\0\f\w\5\0\z\o\o\k\l\e\4\m\c\g\9\h\6\e\f\w\d\e\3\d\a\o\y\t\p\l\g\a\e\3\p\0\3\e\t\t\a\a\u\r\j\t\s\q\n\m\y\l\5\v\e\2\2\v\8\a\t\r\o\g\1\d\8\g\g\1\0\e\8\j\c\4\1\r\r\p\2\y\3\o\e\n\n\k\3\s\b\8\t\b\5\q\r\n\m\u\e\o\j\l\r\v\a\z\k\f\1\e\k\6\e\v\t\e\b\f\a\z\c\5\i\s\r\2\e\i\e\y\5\8\x\e\v\8\l\v\9\l\q\6\l\6\9\g\p\2\g\b\m\y\z\z\4\2\4\6\x\f\b\6\l\j\a\9\q\2\f\i\g\w\i\p\j\v\q\5\4\8\5\p\5\l\l\g\b\b\o\3\v\m\6\y\3\r\e\4\d\f\3\1\y\e\e\7\1\l\t\4\o\v\5\g\t\a\a\z\8\p\j\j\2\u\e\d\y\1\t\q\m\8\2\x\y\a\w\y\8\0\l\i\3\y\9\3\f\n\j\2\1\6\i\t\e\o\k\z\f\c\f\i\5\c\t\y\1\p\2\2\f\a\v\h\c\5\d\h\y\1\7\f\r\s\l\3\7\i\z\f\5\8\u\q\z\r\b\l\j\i\k\r\8\4\0\r\r\8\8\i\f\j\i\t\5\y\4\h\8\c\x\a\k\b\d\b\4\v\2\4\2\6\q\t\c\5\f\g\r\5\b\n\2\u\6\9\z\j\l\m\e\4\u\d\f\u\a\o\a\u\m\b\f\a\j\p\9\d\q\5\e\m\a\j\n\y\s\p\c\k\s\x\1\9\3\n\h\u\9\l\s\6\m\k\g\6\7\2\b\r\i\y\u\g\h\o\2\z\o\i\z\r\8\l ]] 00:07:53.290 12:27:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.290 12:27:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:53.290 [2024-11-19 12:27:58.494004] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:53.290 [2024-11-19 12:27:58.494088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73623 ] 00:07:53.549 [2024-11-19 12:27:58.621507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.549 [2024-11-19 12:27:58.653237] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.549 [2024-11-19 12:27:58.683366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.549  [2024-11-19T12:27:59.069Z] Copying: 512/512 [B] (average 500 kBps) 00:07:53.809 00:07:53.809 12:27:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ stkfsjp7rbrp0h2fv7kwnoqen0eqfr09jbpg2yikhc6xelp1iz9s5hwqs0yqb84xe8dy1sir85y442zshp3oa79rpuios3x0fw50zookle4mcg9h6efwde3daoytplgae3p03ettaaurjtsqnmyl5ve22v8atrog1d8gg10e8jc41rrp2y3oennk3sb8tb5qrnmueojlrvazkf1ek6evtebfazc5isr2eiey58xev8lv9lq6l69gp2gbmyzz4246xfb6lja9q2figwipjvq5485p5llgbbo3vm6y3re4df31yee71lt4ov5gtaaz8pjj2uedy1tqm82xyawy80li3y93fnj216iteokzfcfi5cty1p22favhc5dhy17frsl37izf58uqzrbljikr840rr88ifjit5y4h8cxakbdb4v2426qtc5fgr5bn2u69zjlme4udfuaoaumbfajp9dq5emajnyspcksx193nhu9ls6mkg672briyugho2zoizr8l == \s\t\k\f\s\j\p\7\r\b\r\p\0\h\2\f\v\7\k\w\n\o\q\e\n\0\e\q\f\r\0\9\j\b\p\g\2\y\i\k\h\c\6\x\e\l\p\1\i\z\9\s\5\h\w\q\s\0\y\q\b\8\4\x\e\8\d\y\1\s\i\r\8\5\y\4\4\2\z\s\h\p\3\o\a\7\9\r\p\u\i\o\s\3\x\0\f\w\5\0\z\o\o\k\l\e\4\m\c\g\9\h\6\e\f\w\d\e\3\d\a\o\y\t\p\l\g\a\e\3\p\0\3\e\t\t\a\a\u\r\j\t\s\q\n\m\y\l\5\v\e\2\2\v\8\a\t\r\o\g\1\d\8\g\g\1\0\e\8\j\c\4\1\r\r\p\2\y\3\o\e\n\n\k\3\s\b\8\t\b\5\q\r\n\m\u\e\o\j\l\r\v\a\z\k\f\1\e\k\6\e\v\t\e\b\f\a\z\c\5\i\s\r\2\e\i\e\y\5\8\x\e\v\8\l\v\9\l\q\6\l\6\9\g\p\2\g\b\m\y\z\z\4\2\4\6\x\f\b\6\l\j\a\9\q\2\f\i\g\w\i\p\j\v\q\5\4\8\5\p\5\l\l\g\b\b\o\3\v\m\6\y\3\r\e\4\d\f\3\1\y\e\e\7\1\l\t\4\o\v\5\g\t\a\a\z\8\p\j\j\2\u\e\d\y\1\t\q\m\8\2\x\y\a\w\y\8\0\l\i\3\y\9\3\f\n\j\2\1\6\i\t\e\o\k\z\f\c\f\i\5\c\t\y\1\p\2\2\f\a\v\h\c\5\d\h\y\1\7\f\r\s\l\3\7\i\z\f\5\8\u\q\z\r\b\l\j\i\k\r\8\4\0\r\r\8\8\i\f\j\i\t\5\y\4\h\8\c\x\a\k\b\d\b\4\v\2\4\2\6\q\t\c\5\f\g\r\5\b\n\2\u\6\9\z\j\l\m\e\4\u\d\f\u\a\o\a\u\m\b\f\a\j\p\9\d\q\5\e\m\a\j\n\y\s\p\c\k\s\x\1\9\3\n\h\u\9\l\s\6\m\k\g\6\7\2\b\r\i\y\u\g\h\o\2\z\o\i\z\r\8\l ]] 00:07:53.809 12:27:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.809 12:27:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:53.809 [2024-11-19 12:27:58.918617] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:53.809 [2024-11-19 12:27:58.918924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73625 ] 00:07:53.809 [2024-11-19 12:27:59.057694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.067 [2024-11-19 12:27:59.096172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.067 [2024-11-19 12:27:59.128781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.067  [2024-11-19T12:27:59.327Z] Copying: 512/512 [B] (average 125 kBps) 00:07:54.067 00:07:54.067 12:27:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ stkfsjp7rbrp0h2fv7kwnoqen0eqfr09jbpg2yikhc6xelp1iz9s5hwqs0yqb84xe8dy1sir85y442zshp3oa79rpuios3x0fw50zookle4mcg9h6efwde3daoytplgae3p03ettaaurjtsqnmyl5ve22v8atrog1d8gg10e8jc41rrp2y3oennk3sb8tb5qrnmueojlrvazkf1ek6evtebfazc5isr2eiey58xev8lv9lq6l69gp2gbmyzz4246xfb6lja9q2figwipjvq5485p5llgbbo3vm6y3re4df31yee71lt4ov5gtaaz8pjj2uedy1tqm82xyawy80li3y93fnj216iteokzfcfi5cty1p22favhc5dhy17frsl37izf58uqzrbljikr840rr88ifjit5y4h8cxakbdb4v2426qtc5fgr5bn2u69zjlme4udfuaoaumbfajp9dq5emajnyspcksx193nhu9ls6mkg672briyugho2zoizr8l == \s\t\k\f\s\j\p\7\r\b\r\p\0\h\2\f\v\7\k\w\n\o\q\e\n\0\e\q\f\r\0\9\j\b\p\g\2\y\i\k\h\c\6\x\e\l\p\1\i\z\9\s\5\h\w\q\s\0\y\q\b\8\4\x\e\8\d\y\1\s\i\r\8\5\y\4\4\2\z\s\h\p\3\o\a\7\9\r\p\u\i\o\s\3\x\0\f\w\5\0\z\o\o\k\l\e\4\m\c\g\9\h\6\e\f\w\d\e\3\d\a\o\y\t\p\l\g\a\e\3\p\0\3\e\t\t\a\a\u\r\j\t\s\q\n\m\y\l\5\v\e\2\2\v\8\a\t\r\o\g\1\d\8\g\g\1\0\e\8\j\c\4\1\r\r\p\2\y\3\o\e\n\n\k\3\s\b\8\t\b\5\q\r\n\m\u\e\o\j\l\r\v\a\z\k\f\1\e\k\6\e\v\t\e\b\f\a\z\c\5\i\s\r\2\e\i\e\y\5\8\x\e\v\8\l\v\9\l\q\6\l\6\9\g\p\2\g\b\m\y\z\z\4\2\4\6\x\f\b\6\l\j\a\9\q\2\f\i\g\w\i\p\j\v\q\5\4\8\5\p\5\l\l\g\b\b\o\3\v\m\6\y\3\r\e\4\d\f\3\1\y\e\e\7\1\l\t\4\o\v\5\g\t\a\a\z\8\p\j\j\2\u\e\d\y\1\t\q\m\8\2\x\y\a\w\y\8\0\l\i\3\y\9\3\f\n\j\2\1\6\i\t\e\o\k\z\f\c\f\i\5\c\t\y\1\p\2\2\f\a\v\h\c\5\d\h\y\1\7\f\r\s\l\3\7\i\z\f\5\8\u\q\z\r\b\l\j\i\k\r\8\4\0\r\r\8\8\i\f\j\i\t\5\y\4\h\8\c\x\a\k\b\d\b\4\v\2\4\2\6\q\t\c\5\f\g\r\5\b\n\2\u\6\9\z\j\l\m\e\4\u\d\f\u\a\o\a\u\m\b\f\a\j\p\9\d\q\5\e\m\a\j\n\y\s\p\c\k\s\x\1\9\3\n\h\u\9\l\s\6\m\k\g\6\7\2\b\r\i\y\u\g\h\o\2\z\o\i\z\r\8\l ]] 00:07:54.067 12:27:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.067 12:27:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:54.325 [2024-11-19 12:27:59.366546] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:54.325 [2024-11-19 12:27:59.366731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73633 ] 00:07:54.325 [2024-11-19 12:27:59.517158] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.325 [2024-11-19 12:27:59.548229] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.326 [2024-11-19 12:27:59.574742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.585  [2024-11-19T12:27:59.845Z] Copying: 512/512 [B] (average 250 kBps) 00:07:54.585 00:07:54.585 ************************************ 00:07:54.585 END TEST dd_flags_misc_forced_aio 00:07:54.585 ************************************ 00:07:54.585 12:27:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ stkfsjp7rbrp0h2fv7kwnoqen0eqfr09jbpg2yikhc6xelp1iz9s5hwqs0yqb84xe8dy1sir85y442zshp3oa79rpuios3x0fw50zookle4mcg9h6efwde3daoytplgae3p03ettaaurjtsqnmyl5ve22v8atrog1d8gg10e8jc41rrp2y3oennk3sb8tb5qrnmueojlrvazkf1ek6evtebfazc5isr2eiey58xev8lv9lq6l69gp2gbmyzz4246xfb6lja9q2figwipjvq5485p5llgbbo3vm6y3re4df31yee71lt4ov5gtaaz8pjj2uedy1tqm82xyawy80li3y93fnj216iteokzfcfi5cty1p22favhc5dhy17frsl37izf58uqzrbljikr840rr88ifjit5y4h8cxakbdb4v2426qtc5fgr5bn2u69zjlme4udfuaoaumbfajp9dq5emajnyspcksx193nhu9ls6mkg672briyugho2zoizr8l == \s\t\k\f\s\j\p\7\r\b\r\p\0\h\2\f\v\7\k\w\n\o\q\e\n\0\e\q\f\r\0\9\j\b\p\g\2\y\i\k\h\c\6\x\e\l\p\1\i\z\9\s\5\h\w\q\s\0\y\q\b\8\4\x\e\8\d\y\1\s\i\r\8\5\y\4\4\2\z\s\h\p\3\o\a\7\9\r\p\u\i\o\s\3\x\0\f\w\5\0\z\o\o\k\l\e\4\m\c\g\9\h\6\e\f\w\d\e\3\d\a\o\y\t\p\l\g\a\e\3\p\0\3\e\t\t\a\a\u\r\j\t\s\q\n\m\y\l\5\v\e\2\2\v\8\a\t\r\o\g\1\d\8\g\g\1\0\e\8\j\c\4\1\r\r\p\2\y\3\o\e\n\n\k\3\s\b\8\t\b\5\q\r\n\m\u\e\o\j\l\r\v\a\z\k\f\1\e\k\6\e\v\t\e\b\f\a\z\c\5\i\s\r\2\e\i\e\y\5\8\x\e\v\8\l\v\9\l\q\6\l\6\9\g\p\2\g\b\m\y\z\z\4\2\4\6\x\f\b\6\l\j\a\9\q\2\f\i\g\w\i\p\j\v\q\5\4\8\5\p\5\l\l\g\b\b\o\3\v\m\6\y\3\r\e\4\d\f\3\1\y\e\e\7\1\l\t\4\o\v\5\g\t\a\a\z\8\p\j\j\2\u\e\d\y\1\t\q\m\8\2\x\y\a\w\y\8\0\l\i\3\y\9\3\f\n\j\2\1\6\i\t\e\o\k\z\f\c\f\i\5\c\t\y\1\p\2\2\f\a\v\h\c\5\d\h\y\1\7\f\r\s\l\3\7\i\z\f\5\8\u\q\z\r\b\l\j\i\k\r\8\4\0\r\r\8\8\i\f\j\i\t\5\y\4\h\8\c\x\a\k\b\d\b\4\v\2\4\2\6\q\t\c\5\f\g\r\5\b\n\2\u\6\9\z\j\l\m\e\4\u\d\f\u\a\o\a\u\m\b\f\a\j\p\9\d\q\5\e\m\a\j\n\y\s\p\c\k\s\x\1\9\3\n\h\u\9\l\s\6\m\k\g\6\7\2\b\r\i\y\u\g\h\o\2\z\o\i\z\r\8\l ]] 00:07:54.585 00:07:54.585 real 0m3.439s 00:07:54.585 user 0m1.675s 00:07:54.585 sys 0m0.787s 00:07:54.585 12:27:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.585 12:27:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:54.585 12:27:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:54.585 12:27:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:54.585 12:27:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:54.585 ************************************ 00:07:54.585 END TEST spdk_dd_posix 00:07:54.585 ************************************ 00:07:54.585 00:07:54.585 real 0m15.867s 00:07:54.585 user 0m6.733s 00:07:54.585 sys 0m4.345s 00:07:54.585 12:27:59 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.585 12:27:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:54.585 12:27:59 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:54.585 12:27:59 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.585 12:27:59 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.585 12:27:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:54.585 ************************************ 00:07:54.585 START TEST spdk_dd_malloc 00:07:54.585 ************************************ 00:07:54.585 12:27:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:54.845 * Looking for test storage... 00:07:54.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.845 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.846 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:07:54.846 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:07:54.846 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.846 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:07:54.846 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.846 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:07:54.846 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:07:54.846 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.846 12:27:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:54.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.846 --rc genhtml_branch_coverage=1 00:07:54.846 --rc genhtml_function_coverage=1 00:07:54.846 --rc genhtml_legend=1 00:07:54.846 --rc geninfo_all_blocks=1 00:07:54.846 --rc geninfo_unexecuted_blocks=1 00:07:54.846 00:07:54.846 ' 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:54.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.846 --rc genhtml_branch_coverage=1 00:07:54.846 --rc genhtml_function_coverage=1 00:07:54.846 --rc genhtml_legend=1 00:07:54.846 --rc geninfo_all_blocks=1 00:07:54.846 --rc geninfo_unexecuted_blocks=1 00:07:54.846 00:07:54.846 ' 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:54.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.846 --rc genhtml_branch_coverage=1 00:07:54.846 --rc genhtml_function_coverage=1 00:07:54.846 --rc genhtml_legend=1 00:07:54.846 --rc geninfo_all_blocks=1 00:07:54.846 --rc geninfo_unexecuted_blocks=1 00:07:54.846 00:07:54.846 ' 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:54.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.846 --rc genhtml_branch_coverage=1 00:07:54.846 --rc genhtml_function_coverage=1 00:07:54.846 --rc genhtml_legend=1 00:07:54.846 --rc geninfo_all_blocks=1 00:07:54.846 --rc geninfo_unexecuted_blocks=1 00:07:54.846 00:07:54.846 ' 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.846 12:28:00 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:54.846 ************************************ 00:07:54.846 START TEST dd_malloc_copy 00:07:54.846 ************************************ 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:54.846 12:28:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:54.846 [2024-11-19 12:28:00.079442] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:54.846 [2024-11-19 12:28:00.079734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73709 ] 00:07:54.846 { 00:07:54.846 "subsystems": [ 00:07:54.846 { 00:07:54.846 "subsystem": "bdev", 00:07:54.846 "config": [ 00:07:54.846 { 00:07:54.846 "params": { 00:07:54.846 "block_size": 512, 00:07:54.846 "num_blocks": 1048576, 00:07:54.846 "name": "malloc0" 00:07:54.846 }, 00:07:54.846 "method": "bdev_malloc_create" 00:07:54.846 }, 00:07:54.846 { 00:07:54.846 "params": { 00:07:54.846 "block_size": 512, 00:07:54.846 "num_blocks": 1048576, 00:07:54.846 "name": "malloc1" 00:07:54.846 }, 00:07:54.846 "method": "bdev_malloc_create" 00:07:54.846 }, 00:07:54.846 { 00:07:54.846 "method": "bdev_wait_for_examine" 00:07:54.846 } 00:07:54.846 ] 00:07:54.846 } 00:07:54.846 ] 00:07:54.846 } 00:07:55.106 [2024-11-19 12:28:00.219631] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.106 [2024-11-19 12:28:00.251228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.106 [2024-11-19 12:28:00.278346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.486  [2024-11-19T12:28:02.686Z] Copying: 230/512 [MB] (230 MBps) [2024-11-19T12:28:02.945Z] Copying: 463/512 [MB] (233 MBps) [2024-11-19T12:28:03.205Z] Copying: 512/512 [MB] (average 231 MBps) 00:07:57.945 00:07:57.945 12:28:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:57.945 12:28:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:57.945 12:28:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:57.945 12:28:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:57.945 [2024-11-19 12:28:03.072042] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:57.945 [2024-11-19 12:28:03.072142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73751 ] 00:07:57.945 { 00:07:57.945 "subsystems": [ 00:07:57.945 { 00:07:57.945 "subsystem": "bdev", 00:07:57.945 "config": [ 00:07:57.945 { 00:07:57.945 "params": { 00:07:57.945 "block_size": 512, 00:07:57.945 "num_blocks": 1048576, 00:07:57.945 "name": "malloc0" 00:07:57.945 }, 00:07:57.945 "method": "bdev_malloc_create" 00:07:57.945 }, 00:07:57.945 { 00:07:57.945 "params": { 00:07:57.945 "block_size": 512, 00:07:57.945 "num_blocks": 1048576, 00:07:57.945 "name": "malloc1" 00:07:57.945 }, 00:07:57.945 "method": "bdev_malloc_create" 00:07:57.945 }, 00:07:57.945 { 00:07:57.945 "method": "bdev_wait_for_examine" 00:07:57.945 } 00:07:57.945 ] 00:07:57.945 } 00:07:57.945 ] 00:07:57.945 } 00:07:58.206 [2024-11-19 12:28:03.212631] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.206 [2024-11-19 12:28:03.249987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.206 [2024-11-19 12:28:03.278748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.584  [2024-11-19T12:28:05.782Z] Copying: 232/512 [MB] (232 MBps) [2024-11-19T12:28:05.782Z] Copying: 468/512 [MB] (235 MBps) [2024-11-19T12:28:06.042Z] Copying: 512/512 [MB] (average 234 MBps) 00:08:00.782 00:08:00.782 00:08:00.782 real 0m5.948s 00:08:00.782 user 0m5.308s 00:08:00.782 sys 0m0.486s 00:08:00.782 12:28:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.782 ************************************ 00:08:00.782 END TEST dd_malloc_copy 00:08:00.782 ************************************ 00:08:00.782 12:28:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:00.782 ************************************ 00:08:00.782 END TEST spdk_dd_malloc 00:08:00.782 ************************************ 00:08:00.782 00:08:00.782 real 0m6.187s 00:08:00.782 user 0m5.442s 00:08:00.782 sys 0m0.594s 00:08:00.782 12:28:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.782 12:28:06 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:01.042 12:28:06 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:01.042 12:28:06 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:01.042 12:28:06 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.042 12:28:06 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:01.042 ************************************ 00:08:01.042 START TEST spdk_dd_bdev_to_bdev 00:08:01.042 ************************************ 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:01.042 * Looking for test storage... 
00:08:01.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lcov --version 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:01.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.042 --rc genhtml_branch_coverage=1 00:08:01.042 --rc genhtml_function_coverage=1 00:08:01.042 --rc genhtml_legend=1 00:08:01.042 --rc geninfo_all_blocks=1 00:08:01.042 --rc geninfo_unexecuted_blocks=1 00:08:01.042 00:08:01.042 ' 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:01.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.042 --rc genhtml_branch_coverage=1 00:08:01.042 --rc genhtml_function_coverage=1 00:08:01.042 --rc genhtml_legend=1 00:08:01.042 --rc geninfo_all_blocks=1 00:08:01.042 --rc geninfo_unexecuted_blocks=1 00:08:01.042 00:08:01.042 ' 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:01.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.042 --rc genhtml_branch_coverage=1 00:08:01.042 --rc genhtml_function_coverage=1 00:08:01.042 --rc genhtml_legend=1 00:08:01.042 --rc geninfo_all_blocks=1 00:08:01.042 --rc geninfo_unexecuted_blocks=1 00:08:01.042 00:08:01.042 ' 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:01.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.042 --rc genhtml_branch_coverage=1 00:08:01.042 --rc genhtml_function_coverage=1 00:08:01.042 --rc genhtml_legend=1 00:08:01.042 --rc geninfo_all_blocks=1 00:08:01.042 --rc geninfo_unexecuted_blocks=1 00:08:01.042 00:08:01.042 ' 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.042 12:28:06 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:01.042 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:01.043 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:01.043 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.043 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:01.043 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:01.043 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:01.043 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:01.043 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.043 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:01.043 ************************************ 00:08:01.043 START TEST dd_inflate_file 00:08:01.043 ************************************ 00:08:01.043 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:01.302 [2024-11-19 12:28:06.335033] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:01.302 [2024-11-19 12:28:06.335396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73863 ] 00:08:01.302 [2024-11-19 12:28:06.475275] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.302 [2024-11-19 12:28:06.511267] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.302 [2024-11-19 12:28:06.538877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.562  [2024-11-19T12:28:06.822Z] Copying: 64/64 [MB] (average 1084 MBps) 00:08:01.562 00:08:01.562 ************************************ 00:08:01.562 END TEST dd_inflate_file 00:08:01.562 ************************************ 00:08:01.562 00:08:01.562 real 0m0.479s 00:08:01.562 user 0m0.280s 00:08:01.562 sys 0m0.257s 00:08:01.562 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.562 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:01.562 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:01.562 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:01.562 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:01.562 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:01.562 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:01.562 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.562 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:01.562 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:01.562 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:01.562 ************************************ 00:08:01.562 START TEST dd_copy_to_out_bdev 00:08:01.562 ************************************ 00:08:01.821 12:28:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:01.821 { 00:08:01.821 "subsystems": [ 00:08:01.821 { 00:08:01.821 "subsystem": "bdev", 00:08:01.821 "config": [ 00:08:01.821 { 00:08:01.821 "params": { 00:08:01.821 "trtype": "pcie", 00:08:01.821 "traddr": "0000:00:10.0", 00:08:01.821 "name": "Nvme0" 00:08:01.821 }, 00:08:01.821 "method": "bdev_nvme_attach_controller" 00:08:01.821 }, 00:08:01.821 { 00:08:01.821 "params": { 00:08:01.821 "trtype": "pcie", 00:08:01.821 "traddr": "0000:00:11.0", 00:08:01.821 "name": "Nvme1" 00:08:01.821 }, 00:08:01.821 "method": "bdev_nvme_attach_controller" 00:08:01.821 }, 00:08:01.821 { 00:08:01.821 "method": "bdev_wait_for_examine" 00:08:01.821 } 00:08:01.821 ] 00:08:01.821 } 00:08:01.821 ] 00:08:01.821 } 00:08:01.821 [2024-11-19 12:28:06.876951] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:01.821 [2024-11-19 12:28:06.877282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73897 ] 00:08:01.821 [2024-11-19 12:28:07.019721] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.821 [2024-11-19 12:28:07.054429] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.080 [2024-11-19 12:28:07.084562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.017  [2024-11-19T12:28:08.536Z] Copying: 49/64 [MB] (49 MBps) [2024-11-19T12:28:08.795Z] Copying: 64/64 [MB] (average 49 MBps) 00:08:03.535 00:08:03.535 00:08:03.535 real 0m1.897s 00:08:03.535 user 0m1.718s 00:08:03.535 sys 0m1.528s 00:08:03.535 12:28:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.535 12:28:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:03.535 ************************************ 00:08:03.535 END TEST dd_copy_to_out_bdev 00:08:03.535 ************************************ 00:08:03.535 12:28:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:03.535 12:28:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:03.535 12:28:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.535 12:28:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.535 12:28:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:03.535 ************************************ 00:08:03.535 START TEST dd_offset_magic 00:08:03.535 ************************************ 00:08:03.535 12:28:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:08:03.535 12:28:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:03.535 12:28:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:03.535 12:28:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:03.535 12:28:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:03.535 12:28:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:03.535 12:28:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:03.535 12:28:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:03.535 12:28:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:03.793 [2024-11-19 12:28:08.824118] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:03.793 [2024-11-19 12:28:08.824395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73937 ] 00:08:03.793 { 00:08:03.793 "subsystems": [ 00:08:03.793 { 00:08:03.793 "subsystem": "bdev", 00:08:03.793 "config": [ 00:08:03.793 { 00:08:03.793 "params": { 00:08:03.793 "trtype": "pcie", 00:08:03.793 "traddr": "0000:00:10.0", 00:08:03.793 "name": "Nvme0" 00:08:03.793 }, 00:08:03.793 "method": "bdev_nvme_attach_controller" 00:08:03.793 }, 00:08:03.793 { 00:08:03.793 "params": { 00:08:03.794 "trtype": "pcie", 00:08:03.794 "traddr": "0000:00:11.0", 00:08:03.794 "name": "Nvme1" 00:08:03.794 }, 00:08:03.794 "method": "bdev_nvme_attach_controller" 00:08:03.794 }, 00:08:03.794 { 00:08:03.794 "method": "bdev_wait_for_examine" 00:08:03.794 } 00:08:03.794 ] 00:08:03.794 } 00:08:03.794 ] 00:08:03.794 } 00:08:03.794 [2024-11-19 12:28:08.960233] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.794 [2024-11-19 12:28:08.993825] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.794 [2024-11-19 12:28:09.021912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.052  [2024-11-19T12:28:09.570Z] Copying: 65/65 [MB] (average 942 MBps) 00:08:04.310 00:08:04.310 12:28:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:04.310 12:28:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:04.310 12:28:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:04.310 12:28:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:04.310 [2024-11-19 12:28:09.488107] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:04.310 [2024-11-19 12:28:09.488485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73957 ] 00:08:04.310 { 00:08:04.310 "subsystems": [ 00:08:04.310 { 00:08:04.310 "subsystem": "bdev", 00:08:04.311 "config": [ 00:08:04.311 { 00:08:04.311 "params": { 00:08:04.311 "trtype": "pcie", 00:08:04.311 "traddr": "0000:00:10.0", 00:08:04.311 "name": "Nvme0" 00:08:04.311 }, 00:08:04.311 "method": "bdev_nvme_attach_controller" 00:08:04.311 }, 00:08:04.311 { 00:08:04.311 "params": { 00:08:04.311 "trtype": "pcie", 00:08:04.311 "traddr": "0000:00:11.0", 00:08:04.311 "name": "Nvme1" 00:08:04.311 }, 00:08:04.311 "method": "bdev_nvme_attach_controller" 00:08:04.311 }, 00:08:04.311 { 00:08:04.311 "method": "bdev_wait_for_examine" 00:08:04.311 } 00:08:04.311 ] 00:08:04.311 } 00:08:04.311 ] 00:08:04.311 } 00:08:04.570 [2024-11-19 12:28:09.627655] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.570 [2024-11-19 12:28:09.661150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.570 [2024-11-19 12:28:09.689396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.829  [2024-11-19T12:28:10.089Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:04.829 00:08:04.829 12:28:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:04.829 12:28:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:04.829 12:28:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:04.829 12:28:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:04.829 12:28:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:04.829 12:28:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:04.829 12:28:09 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:04.829 [2024-11-19 12:28:10.049453] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:04.829 [2024-11-19 12:28:10.049576] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73973 ] 00:08:04.829 { 00:08:04.829 "subsystems": [ 00:08:04.829 { 00:08:04.829 "subsystem": "bdev", 00:08:04.829 "config": [ 00:08:04.829 { 00:08:04.829 "params": { 00:08:04.829 "trtype": "pcie", 00:08:04.829 "traddr": "0000:00:10.0", 00:08:04.829 "name": "Nvme0" 00:08:04.829 }, 00:08:04.829 "method": "bdev_nvme_attach_controller" 00:08:04.829 }, 00:08:04.829 { 00:08:04.829 "params": { 00:08:04.829 "trtype": "pcie", 00:08:04.829 "traddr": "0000:00:11.0", 00:08:04.829 "name": "Nvme1" 00:08:04.829 }, 00:08:04.829 "method": "bdev_nvme_attach_controller" 00:08:04.829 }, 00:08:04.829 { 00:08:04.829 "method": "bdev_wait_for_examine" 00:08:04.829 } 00:08:04.829 ] 00:08:04.829 } 00:08:04.829 ] 00:08:04.829 } 00:08:05.089 [2024-11-19 12:28:10.190008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.089 [2024-11-19 12:28:10.224147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.089 [2024-11-19 12:28:10.252950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.348  [2024-11-19T12:28:10.868Z] Copying: 65/65 [MB] (average 1031 MBps) 00:08:05.608 00:08:05.608 12:28:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:05.608 12:28:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:05.608 12:28:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:05.608 12:28:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:05.608 [2024-11-19 12:28:10.719133] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:05.608 [2024-11-19 12:28:10.719815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73988 ] 00:08:05.608 { 00:08:05.608 "subsystems": [ 00:08:05.608 { 00:08:05.608 "subsystem": "bdev", 00:08:05.608 "config": [ 00:08:05.608 { 00:08:05.608 "params": { 00:08:05.608 "trtype": "pcie", 00:08:05.608 "traddr": "0000:00:10.0", 00:08:05.608 "name": "Nvme0" 00:08:05.608 }, 00:08:05.608 "method": "bdev_nvme_attach_controller" 00:08:05.608 }, 00:08:05.608 { 00:08:05.608 "params": { 00:08:05.608 "trtype": "pcie", 00:08:05.608 "traddr": "0000:00:11.0", 00:08:05.608 "name": "Nvme1" 00:08:05.608 }, 00:08:05.608 "method": "bdev_nvme_attach_controller" 00:08:05.608 }, 00:08:05.608 { 00:08:05.608 "method": "bdev_wait_for_examine" 00:08:05.608 } 00:08:05.608 ] 00:08:05.608 } 00:08:05.608 ] 00:08:05.608 } 00:08:05.608 [2024-11-19 12:28:10.858889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.868 [2024-11-19 12:28:10.893851] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.868 [2024-11-19 12:28:10.923763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.868  [2024-11-19T12:28:11.387Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:06.127 00:08:06.127 ************************************ 00:08:06.127 END TEST dd_offset_magic 00:08:06.127 ************************************ 00:08:06.127 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:06.127 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:06.127 00:08:06.127 real 0m2.439s 00:08:06.127 user 0m1.783s 00:08:06.127 sys 0m0.648s 00:08:06.127 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.127 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:06.127 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:06.128 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:06.128 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:06.128 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:06.128 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:06.128 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:06.128 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:06.128 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:06.128 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:06.128 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:06.128 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:06.128 [2024-11-19 12:28:11.318212] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:06.128 { 00:08:06.128 "subsystems": [ 00:08:06.128 { 00:08:06.128 "subsystem": "bdev", 00:08:06.128 "config": [ 00:08:06.128 { 00:08:06.128 "params": { 00:08:06.128 "trtype": "pcie", 00:08:06.128 "traddr": "0000:00:10.0", 00:08:06.128 "name": "Nvme0" 00:08:06.128 }, 00:08:06.128 "method": "bdev_nvme_attach_controller" 00:08:06.128 }, 00:08:06.128 { 00:08:06.128 "params": { 00:08:06.128 "trtype": "pcie", 00:08:06.128 "traddr": "0000:00:11.0", 00:08:06.128 "name": "Nvme1" 00:08:06.128 }, 00:08:06.128 "method": "bdev_nvme_attach_controller" 00:08:06.128 }, 00:08:06.128 { 00:08:06.128 "method": "bdev_wait_for_examine" 00:08:06.128 } 00:08:06.128 ] 00:08:06.128 } 00:08:06.128 ] 00:08:06.128 } 00:08:06.128 [2024-11-19 12:28:11.318358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74019 ] 00:08:06.387 [2024-11-19 12:28:11.459290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.387 [2024-11-19 12:28:11.493187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.387 [2024-11-19 12:28:11.521474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.647  [2024-11-19T12:28:11.907Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:06.647 00:08:06.647 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:06.647 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:06.647 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:06.647 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:06.647 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:06.647 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:06.647 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:06.647 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:06.647 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:06.647 12:28:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:06.647 [2024-11-19 12:28:11.888297] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:06.647 [2024-11-19 12:28:11.888659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74035 ] 00:08:06.647 { 00:08:06.647 "subsystems": [ 00:08:06.647 { 00:08:06.647 "subsystem": "bdev", 00:08:06.647 "config": [ 00:08:06.647 { 00:08:06.647 "params": { 00:08:06.647 "trtype": "pcie", 00:08:06.647 "traddr": "0000:00:10.0", 00:08:06.647 "name": "Nvme0" 00:08:06.647 }, 00:08:06.647 "method": "bdev_nvme_attach_controller" 00:08:06.647 }, 00:08:06.647 { 00:08:06.647 "params": { 00:08:06.647 "trtype": "pcie", 00:08:06.647 "traddr": "0000:00:11.0", 00:08:06.647 "name": "Nvme1" 00:08:06.647 }, 00:08:06.647 "method": "bdev_nvme_attach_controller" 00:08:06.647 }, 00:08:06.647 { 00:08:06.647 "method": "bdev_wait_for_examine" 00:08:06.647 } 00:08:06.647 ] 00:08:06.647 } 00:08:06.647 ] 00:08:06.647 } 00:08:06.907 [2024-11-19 12:28:12.031636] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.907 [2024-11-19 12:28:12.066106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.907 [2024-11-19 12:28:12.094254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.166  [2024-11-19T12:28:12.426Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:08:07.166 00:08:07.166 12:28:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:07.166 ************************************ 00:08:07.166 END TEST spdk_dd_bdev_to_bdev 00:08:07.166 ************************************ 00:08:07.166 00:08:07.166 real 0m6.353s 00:08:07.166 user 0m4.783s 00:08:07.166 sys 0m3.003s 00:08:07.166 12:28:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.166 12:28:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:07.434 12:28:12 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:07.434 12:28:12 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:07.434 12:28:12 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.434 12:28:12 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.434 12:28:12 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:07.434 ************************************ 00:08:07.434 START TEST spdk_dd_uring 00:08:07.434 ************************************ 00:08:07.434 12:28:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:07.434 * Looking for test storage... 
00:08:07.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:07.434 12:28:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:07.434 12:28:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lcov --version 00:08:07.434 12:28:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:07.434 12:28:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:07.434 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.434 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:07.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.435 --rc genhtml_branch_coverage=1 00:08:07.435 --rc genhtml_function_coverage=1 00:08:07.435 --rc genhtml_legend=1 00:08:07.435 --rc geninfo_all_blocks=1 00:08:07.435 --rc geninfo_unexecuted_blocks=1 00:08:07.435 00:08:07.435 ' 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:07.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.435 --rc genhtml_branch_coverage=1 00:08:07.435 --rc genhtml_function_coverage=1 00:08:07.435 --rc genhtml_legend=1 00:08:07.435 --rc geninfo_all_blocks=1 00:08:07.435 --rc geninfo_unexecuted_blocks=1 00:08:07.435 00:08:07.435 ' 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:07.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.435 --rc genhtml_branch_coverage=1 00:08:07.435 --rc genhtml_function_coverage=1 00:08:07.435 --rc genhtml_legend=1 00:08:07.435 --rc geninfo_all_blocks=1 00:08:07.435 --rc geninfo_unexecuted_blocks=1 00:08:07.435 00:08:07.435 ' 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:07.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.435 --rc genhtml_branch_coverage=1 00:08:07.435 --rc genhtml_function_coverage=1 00:08:07.435 --rc genhtml_legend=1 00:08:07.435 --rc geninfo_all_blocks=1 00:08:07.435 --rc geninfo_unexecuted_blocks=1 00:08:07.435 00:08:07.435 ' 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.435 12:28:12 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.706 12:28:12 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:07.707 ************************************ 00:08:07.707 START TEST dd_uring_copy 00:08:07.707 ************************************ 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:07.707 
12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=lue4jogpjkyfhldfze4ckp8m55381pj3afabfvbbiijv1tynz2scg1yec17lt02lj0mx9i1kxotgxfzd99pyfza3m457hpubqn2h04vbf7eri86enuzop29ktdiyx4jc6i7z6alksig5a7igdlf6qphqllatpjxjqoto1qxwyerg0fu2i1ayanaz9lfm2bwh1hwx2z51497tc0fm9ptjru3ljr3xq2lfngg9z80jd20bv87umvozepg4w2ty78agh337a1ecro3kyjtk44xg02my1fa3bvyky749c6y9j1rl1890ilvvmfq3bpirsezdumi0wnt9uqqmyth79rjsrnpwmmnonatiow1opmnb2me643wh7blj78vvuxnhuhb4cq7x3hj0b54yipru3o5n2m02bzg9ampx3tcmjpqdmokcqxecftqecrda3f58r2a8grt77g8dbk15qfe4m9loj72isd2s6m63sy02ha9no5s9undhxhc8fevk0225lvd1rlt0ixmnzb3btj5j04rpxro71axxe5nivo7q3nly6zhpf593lt5cfvrogzdd51tlcoaqovnsqs1h70x7ltpqlmmzfgja4pxtznqpuhacmburlgogwwa3qz9s2i1rk8idns0ufgn1bt61yldaqtie99z21pfacgoy2b9vf4afej3i1t4ptllf2x2qti7d6ndlpms49vmaktgoejwyvxraxwqqucw43jcz1tmoidij5p84sqvsdybbty3jdxqjm5pnn7idod9qkkounp47yqna0muv1sadnu0gkmua7kx97s3phe5p8y1wontq64a38d4cqojovshrcjaa3m07oem3a2epxeohv3j79qpbk7o8jmz8cgju6ygeg2uhl2g84c4ejszjun7qemfh034fvyt17j2k5085mz9rhmh60gjgng3cxd1cilvt476m5wxxx1henvu6qim15ewco8u4yoz667hs6u7zmiebovve2l1y2ww2988xp9bnzzruikf07yva 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
lue4jogpjkyfhldfze4ckp8m55381pj3afabfvbbiijv1tynz2scg1yec17lt02lj0mx9i1kxotgxfzd99pyfza3m457hpubqn2h04vbf7eri86enuzop29ktdiyx4jc6i7z6alksig5a7igdlf6qphqllatpjxjqoto1qxwyerg0fu2i1ayanaz9lfm2bwh1hwx2z51497tc0fm9ptjru3ljr3xq2lfngg9z80jd20bv87umvozepg4w2ty78agh337a1ecro3kyjtk44xg02my1fa3bvyky749c6y9j1rl1890ilvvmfq3bpirsezdumi0wnt9uqqmyth79rjsrnpwmmnonatiow1opmnb2me643wh7blj78vvuxnhuhb4cq7x3hj0b54yipru3o5n2m02bzg9ampx3tcmjpqdmokcqxecftqecrda3f58r2a8grt77g8dbk15qfe4m9loj72isd2s6m63sy02ha9no5s9undhxhc8fevk0225lvd1rlt0ixmnzb3btj5j04rpxro71axxe5nivo7q3nly6zhpf593lt5cfvrogzdd51tlcoaqovnsqs1h70x7ltpqlmmzfgja4pxtznqpuhacmburlgogwwa3qz9s2i1rk8idns0ufgn1bt61yldaqtie99z21pfacgoy2b9vf4afej3i1t4ptllf2x2qti7d6ndlpms49vmaktgoejwyvxraxwqqucw43jcz1tmoidij5p84sqvsdybbty3jdxqjm5pnn7idod9qkkounp47yqna0muv1sadnu0gkmua7kx97s3phe5p8y1wontq64a38d4cqojovshrcjaa3m07oem3a2epxeohv3j79qpbk7o8jmz8cgju6ygeg2uhl2g84c4ejszjun7qemfh034fvyt17j2k5085mz9rhmh60gjgng3cxd1cilvt476m5wxxx1henvu6qim15ewco8u4yoz667hs6u7zmiebovve2l1y2ww2988xp9bnzzruikf07yva 00:08:07.707 12:28:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:07.707 [2024-11-19 12:28:12.789470] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:07.707 [2024-11-19 12:28:12.789893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74113 ] 00:08:07.707 [2024-11-19 12:28:12.931918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.967 [2024-11-19 12:28:12.967405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.967 [2024-11-19 12:28:12.995506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.535  [2024-11-19T12:28:13.795Z] Copying: 511/511 [MB] (average 1239 MBps) 00:08:08.535 00:08:08.535 12:28:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:08.535 12:28:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:08.535 12:28:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:08.535 12:28:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:08.794 [2024-11-19 12:28:13.821916] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:08.794 [2024-11-19 12:28:13.822038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74129 ] 00:08:08.794 { 00:08:08.794 "subsystems": [ 00:08:08.794 { 00:08:08.794 "subsystem": "bdev", 00:08:08.794 "config": [ 00:08:08.794 { 00:08:08.794 "params": { 00:08:08.794 "block_size": 512, 00:08:08.794 "num_blocks": 1048576, 00:08:08.794 "name": "malloc0" 00:08:08.794 }, 00:08:08.794 "method": "bdev_malloc_create" 00:08:08.794 }, 00:08:08.794 { 00:08:08.794 "params": { 00:08:08.794 "filename": "/dev/zram1", 00:08:08.794 "name": "uring0" 00:08:08.794 }, 00:08:08.794 "method": "bdev_uring_create" 00:08:08.794 }, 00:08:08.794 { 00:08:08.794 "method": "bdev_wait_for_examine" 00:08:08.794 } 00:08:08.794 ] 00:08:08.794 } 00:08:08.794 ] 00:08:08.794 } 00:08:08.794 [2024-11-19 12:28:13.955660] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.794 [2024-11-19 12:28:13.993069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.794 [2024-11-19 12:28:14.021188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.173  [2024-11-19T12:28:16.371Z] Copying: 245/512 [MB] (245 MBps) [2024-11-19T12:28:16.371Z] Copying: 485/512 [MB] (240 MBps) [2024-11-19T12:28:16.630Z] Copying: 512/512 [MB] (average 241 MBps) 00:08:11.370 00:08:11.370 12:28:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:11.370 12:28:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:11.370 12:28:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:11.370 12:28:16 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:11.370 [2024-11-19 12:28:16.552723] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:11.370 [2024-11-19 12:28:16.552815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74167 ] 00:08:11.370 { 00:08:11.370 "subsystems": [ 00:08:11.370 { 00:08:11.370 "subsystem": "bdev", 00:08:11.370 "config": [ 00:08:11.370 { 00:08:11.370 "params": { 00:08:11.370 "block_size": 512, 00:08:11.370 "num_blocks": 1048576, 00:08:11.370 "name": "malloc0" 00:08:11.370 }, 00:08:11.370 "method": "bdev_malloc_create" 00:08:11.370 }, 00:08:11.370 { 00:08:11.370 "params": { 00:08:11.370 "filename": "/dev/zram1", 00:08:11.370 "name": "uring0" 00:08:11.370 }, 00:08:11.370 "method": "bdev_uring_create" 00:08:11.370 }, 00:08:11.370 { 00:08:11.370 "method": "bdev_wait_for_examine" 00:08:11.370 } 00:08:11.370 ] 00:08:11.370 } 00:08:11.370 ] 00:08:11.370 } 00:08:11.630 [2024-11-19 12:28:16.688522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.630 [2024-11-19 12:28:16.726523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.630 [2024-11-19 12:28:16.759377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.007  [2024-11-19T12:28:19.202Z] Copying: 182/512 [MB] (182 MBps) [2024-11-19T12:28:19.769Z] Copying: 380/512 [MB] (198 MBps) [2024-11-19T12:28:20.030Z] Copying: 512/512 [MB] (average 183 MBps) 00:08:14.770 00:08:14.770 12:28:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:14.770 12:28:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ lue4jogpjkyfhldfze4ckp8m55381pj3afabfvbbiijv1tynz2scg1yec17lt02lj0mx9i1kxotgxfzd99pyfza3m457hpubqn2h04vbf7eri86enuzop29ktdiyx4jc6i7z6alksig5a7igdlf6qphqllatpjxjqoto1qxwyerg0fu2i1ayanaz9lfm2bwh1hwx2z51497tc0fm9ptjru3ljr3xq2lfngg9z80jd20bv87umvozepg4w2ty78agh337a1ecro3kyjtk44xg02my1fa3bvyky749c6y9j1rl1890ilvvmfq3bpirsezdumi0wnt9uqqmyth79rjsrnpwmmnonatiow1opmnb2me643wh7blj78vvuxnhuhb4cq7x3hj0b54yipru3o5n2m02bzg9ampx3tcmjpqdmokcqxecftqecrda3f58r2a8grt77g8dbk15qfe4m9loj72isd2s6m63sy02ha9no5s9undhxhc8fevk0225lvd1rlt0ixmnzb3btj5j04rpxro71axxe5nivo7q3nly6zhpf593lt5cfvrogzdd51tlcoaqovnsqs1h70x7ltpqlmmzfgja4pxtznqpuhacmburlgogwwa3qz9s2i1rk8idns0ufgn1bt61yldaqtie99z21pfacgoy2b9vf4afej3i1t4ptllf2x2qti7d6ndlpms49vmaktgoejwyvxraxwqqucw43jcz1tmoidij5p84sqvsdybbty3jdxqjm5pnn7idod9qkkounp47yqna0muv1sadnu0gkmua7kx97s3phe5p8y1wontq64a38d4cqojovshrcjaa3m07oem3a2epxeohv3j79qpbk7o8jmz8cgju6ygeg2uhl2g84c4ejszjun7qemfh034fvyt17j2k5085mz9rhmh60gjgng3cxd1cilvt476m5wxxx1henvu6qim15ewco8u4yoz667hs6u7zmiebovve2l1y2ww2988xp9bnzzruikf07yva == 
\l\u\e\4\j\o\g\p\j\k\y\f\h\l\d\f\z\e\4\c\k\p\8\m\5\5\3\8\1\p\j\3\a\f\a\b\f\v\b\b\i\i\j\v\1\t\y\n\z\2\s\c\g\1\y\e\c\1\7\l\t\0\2\l\j\0\m\x\9\i\1\k\x\o\t\g\x\f\z\d\9\9\p\y\f\z\a\3\m\4\5\7\h\p\u\b\q\n\2\h\0\4\v\b\f\7\e\r\i\8\6\e\n\u\z\o\p\2\9\k\t\d\i\y\x\4\j\c\6\i\7\z\6\a\l\k\s\i\g\5\a\7\i\g\d\l\f\6\q\p\h\q\l\l\a\t\p\j\x\j\q\o\t\o\1\q\x\w\y\e\r\g\0\f\u\2\i\1\a\y\a\n\a\z\9\l\f\m\2\b\w\h\1\h\w\x\2\z\5\1\4\9\7\t\c\0\f\m\9\p\t\j\r\u\3\l\j\r\3\x\q\2\l\f\n\g\g\9\z\8\0\j\d\2\0\b\v\8\7\u\m\v\o\z\e\p\g\4\w\2\t\y\7\8\a\g\h\3\3\7\a\1\e\c\r\o\3\k\y\j\t\k\4\4\x\g\0\2\m\y\1\f\a\3\b\v\y\k\y\7\4\9\c\6\y\9\j\1\r\l\1\8\9\0\i\l\v\v\m\f\q\3\b\p\i\r\s\e\z\d\u\m\i\0\w\n\t\9\u\q\q\m\y\t\h\7\9\r\j\s\r\n\p\w\m\m\n\o\n\a\t\i\o\w\1\o\p\m\n\b\2\m\e\6\4\3\w\h\7\b\l\j\7\8\v\v\u\x\n\h\u\h\b\4\c\q\7\x\3\h\j\0\b\5\4\y\i\p\r\u\3\o\5\n\2\m\0\2\b\z\g\9\a\m\p\x\3\t\c\m\j\p\q\d\m\o\k\c\q\x\e\c\f\t\q\e\c\r\d\a\3\f\5\8\r\2\a\8\g\r\t\7\7\g\8\d\b\k\1\5\q\f\e\4\m\9\l\o\j\7\2\i\s\d\2\s\6\m\6\3\s\y\0\2\h\a\9\n\o\5\s\9\u\n\d\h\x\h\c\8\f\e\v\k\0\2\2\5\l\v\d\1\r\l\t\0\i\x\m\n\z\b\3\b\t\j\5\j\0\4\r\p\x\r\o\7\1\a\x\x\e\5\n\i\v\o\7\q\3\n\l\y\6\z\h\p\f\5\9\3\l\t\5\c\f\v\r\o\g\z\d\d\5\1\t\l\c\o\a\q\o\v\n\s\q\s\1\h\7\0\x\7\l\t\p\q\l\m\m\z\f\g\j\a\4\p\x\t\z\n\q\p\u\h\a\c\m\b\u\r\l\g\o\g\w\w\a\3\q\z\9\s\2\i\1\r\k\8\i\d\n\s\0\u\f\g\n\1\b\t\6\1\y\l\d\a\q\t\i\e\9\9\z\2\1\p\f\a\c\g\o\y\2\b\9\v\f\4\a\f\e\j\3\i\1\t\4\p\t\l\l\f\2\x\2\q\t\i\7\d\6\n\d\l\p\m\s\4\9\v\m\a\k\t\g\o\e\j\w\y\v\x\r\a\x\w\q\q\u\c\w\4\3\j\c\z\1\t\m\o\i\d\i\j\5\p\8\4\s\q\v\s\d\y\b\b\t\y\3\j\d\x\q\j\m\5\p\n\n\7\i\d\o\d\9\q\k\k\o\u\n\p\4\7\y\q\n\a\0\m\u\v\1\s\a\d\n\u\0\g\k\m\u\a\7\k\x\9\7\s\3\p\h\e\5\p\8\y\1\w\o\n\t\q\6\4\a\3\8\d\4\c\q\o\j\o\v\s\h\r\c\j\a\a\3\m\0\7\o\e\m\3\a\2\e\p\x\e\o\h\v\3\j\7\9\q\p\b\k\7\o\8\j\m\z\8\c\g\j\u\6\y\g\e\g\2\u\h\l\2\g\8\4\c\4\e\j\s\z\j\u\n\7\q\e\m\f\h\0\3\4\f\v\y\t\1\7\j\2\k\5\0\8\5\m\z\9\r\h\m\h\6\0\g\j\g\n\g\3\c\x\d\1\c\i\l\v\t\4\7\6\m\5\w\x\x\x\1\h\e\n\v\u\6\q\i\m\1\5\e\w\c\o\8\u\4\y\o\z\6\6\7\h\s\6\u\7\z\m\i\e\b\o\v\v\e\2\l\1\y\2\w\w\2\9\8\8\x\p\9\b\n\z\z\r\u\i\k\f\0\7\y\v\a ]] 00:08:14.770 12:28:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:14.770 12:28:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ lue4jogpjkyfhldfze4ckp8m55381pj3afabfvbbiijv1tynz2scg1yec17lt02lj0mx9i1kxotgxfzd99pyfza3m457hpubqn2h04vbf7eri86enuzop29ktdiyx4jc6i7z6alksig5a7igdlf6qphqllatpjxjqoto1qxwyerg0fu2i1ayanaz9lfm2bwh1hwx2z51497tc0fm9ptjru3ljr3xq2lfngg9z80jd20bv87umvozepg4w2ty78agh337a1ecro3kyjtk44xg02my1fa3bvyky749c6y9j1rl1890ilvvmfq3bpirsezdumi0wnt9uqqmyth79rjsrnpwmmnonatiow1opmnb2me643wh7blj78vvuxnhuhb4cq7x3hj0b54yipru3o5n2m02bzg9ampx3tcmjpqdmokcqxecftqecrda3f58r2a8grt77g8dbk15qfe4m9loj72isd2s6m63sy02ha9no5s9undhxhc8fevk0225lvd1rlt0ixmnzb3btj5j04rpxro71axxe5nivo7q3nly6zhpf593lt5cfvrogzdd51tlcoaqovnsqs1h70x7ltpqlmmzfgja4pxtznqpuhacmburlgogwwa3qz9s2i1rk8idns0ufgn1bt61yldaqtie99z21pfacgoy2b9vf4afej3i1t4ptllf2x2qti7d6ndlpms49vmaktgoejwyvxraxwqqucw43jcz1tmoidij5p84sqvsdybbty3jdxqjm5pnn7idod9qkkounp47yqna0muv1sadnu0gkmua7kx97s3phe5p8y1wontq64a38d4cqojovshrcjaa3m07oem3a2epxeohv3j79qpbk7o8jmz8cgju6ygeg2uhl2g84c4ejszjun7qemfh034fvyt17j2k5085mz9rhmh60gjgng3cxd1cilvt476m5wxxx1henvu6qim15ewco8u4yoz667hs6u7zmiebovve2l1y2ww2988xp9bnzzruikf07yva == 
\l\u\e\4\j\o\g\p\j\k\y\f\h\l\d\f\z\e\4\c\k\p\8\m\5\5\3\8\1\p\j\3\a\f\a\b\f\v\b\b\i\i\j\v\1\t\y\n\z\2\s\c\g\1\y\e\c\1\7\l\t\0\2\l\j\0\m\x\9\i\1\k\x\o\t\g\x\f\z\d\9\9\p\y\f\z\a\3\m\4\5\7\h\p\u\b\q\n\2\h\0\4\v\b\f\7\e\r\i\8\6\e\n\u\z\o\p\2\9\k\t\d\i\y\x\4\j\c\6\i\7\z\6\a\l\k\s\i\g\5\a\7\i\g\d\l\f\6\q\p\h\q\l\l\a\t\p\j\x\j\q\o\t\o\1\q\x\w\y\e\r\g\0\f\u\2\i\1\a\y\a\n\a\z\9\l\f\m\2\b\w\h\1\h\w\x\2\z\5\1\4\9\7\t\c\0\f\m\9\p\t\j\r\u\3\l\j\r\3\x\q\2\l\f\n\g\g\9\z\8\0\j\d\2\0\b\v\8\7\u\m\v\o\z\e\p\g\4\w\2\t\y\7\8\a\g\h\3\3\7\a\1\e\c\r\o\3\k\y\j\t\k\4\4\x\g\0\2\m\y\1\f\a\3\b\v\y\k\y\7\4\9\c\6\y\9\j\1\r\l\1\8\9\0\i\l\v\v\m\f\q\3\b\p\i\r\s\e\z\d\u\m\i\0\w\n\t\9\u\q\q\m\y\t\h\7\9\r\j\s\r\n\p\w\m\m\n\o\n\a\t\i\o\w\1\o\p\m\n\b\2\m\e\6\4\3\w\h\7\b\l\j\7\8\v\v\u\x\n\h\u\h\b\4\c\q\7\x\3\h\j\0\b\5\4\y\i\p\r\u\3\o\5\n\2\m\0\2\b\z\g\9\a\m\p\x\3\t\c\m\j\p\q\d\m\o\k\c\q\x\e\c\f\t\q\e\c\r\d\a\3\f\5\8\r\2\a\8\g\r\t\7\7\g\8\d\b\k\1\5\q\f\e\4\m\9\l\o\j\7\2\i\s\d\2\s\6\m\6\3\s\y\0\2\h\a\9\n\o\5\s\9\u\n\d\h\x\h\c\8\f\e\v\k\0\2\2\5\l\v\d\1\r\l\t\0\i\x\m\n\z\b\3\b\t\j\5\j\0\4\r\p\x\r\o\7\1\a\x\x\e\5\n\i\v\o\7\q\3\n\l\y\6\z\h\p\f\5\9\3\l\t\5\c\f\v\r\o\g\z\d\d\5\1\t\l\c\o\a\q\o\v\n\s\q\s\1\h\7\0\x\7\l\t\p\q\l\m\m\z\f\g\j\a\4\p\x\t\z\n\q\p\u\h\a\c\m\b\u\r\l\g\o\g\w\w\a\3\q\z\9\s\2\i\1\r\k\8\i\d\n\s\0\u\f\g\n\1\b\t\6\1\y\l\d\a\q\t\i\e\9\9\z\2\1\p\f\a\c\g\o\y\2\b\9\v\f\4\a\f\e\j\3\i\1\t\4\p\t\l\l\f\2\x\2\q\t\i\7\d\6\n\d\l\p\m\s\4\9\v\m\a\k\t\g\o\e\j\w\y\v\x\r\a\x\w\q\q\u\c\w\4\3\j\c\z\1\t\m\o\i\d\i\j\5\p\8\4\s\q\v\s\d\y\b\b\t\y\3\j\d\x\q\j\m\5\p\n\n\7\i\d\o\d\9\q\k\k\o\u\n\p\4\7\y\q\n\a\0\m\u\v\1\s\a\d\n\u\0\g\k\m\u\a\7\k\x\9\7\s\3\p\h\e\5\p\8\y\1\w\o\n\t\q\6\4\a\3\8\d\4\c\q\o\j\o\v\s\h\r\c\j\a\a\3\m\0\7\o\e\m\3\a\2\e\p\x\e\o\h\v\3\j\7\9\q\p\b\k\7\o\8\j\m\z\8\c\g\j\u\6\y\g\e\g\2\u\h\l\2\g\8\4\c\4\e\j\s\z\j\u\n\7\q\e\m\f\h\0\3\4\f\v\y\t\1\7\j\2\k\5\0\8\5\m\z\9\r\h\m\h\6\0\g\j\g\n\g\3\c\x\d\1\c\i\l\v\t\4\7\6\m\5\w\x\x\x\1\h\e\n\v\u\6\q\i\m\1\5\e\w\c\o\8\u\4\y\o\z\6\6\7\h\s\6\u\7\z\m\i\e\b\o\v\v\e\2\l\1\y\2\w\w\2\9\8\8\x\p\9\b\n\z\z\r\u\i\k\f\0\7\y\v\a ]] 00:08:14.770 12:28:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:15.029 12:28:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:15.029 12:28:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:15.029 12:28:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:15.029 12:28:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:15.288 [2024-11-19 12:28:20.313533] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:15.288 [2024-11-19 12:28:20.313827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74224 ] 00:08:15.288 { 00:08:15.288 "subsystems": [ 00:08:15.288 { 00:08:15.288 "subsystem": "bdev", 00:08:15.288 "config": [ 00:08:15.288 { 00:08:15.288 "params": { 00:08:15.288 "block_size": 512, 00:08:15.288 "num_blocks": 1048576, 00:08:15.288 "name": "malloc0" 00:08:15.288 }, 00:08:15.288 "method": "bdev_malloc_create" 00:08:15.288 }, 00:08:15.288 { 00:08:15.288 "params": { 00:08:15.288 "filename": "/dev/zram1", 00:08:15.288 "name": "uring0" 00:08:15.288 }, 00:08:15.288 "method": "bdev_uring_create" 00:08:15.288 }, 00:08:15.288 { 00:08:15.288 "method": "bdev_wait_for_examine" 00:08:15.288 } 00:08:15.288 ] 00:08:15.288 } 00:08:15.288 ] 00:08:15.288 } 00:08:15.288 [2024-11-19 12:28:20.448385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.288 [2024-11-19 12:28:20.481100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.288 [2024-11-19 12:28:20.509185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.666  [2024-11-19T12:28:22.864Z] Copying: 158/512 [MB] (158 MBps) [2024-11-19T12:28:23.803Z] Copying: 315/512 [MB] (156 MBps) [2024-11-19T12:28:23.803Z] Copying: 484/512 [MB] (169 MBps) [2024-11-19T12:28:24.063Z] Copying: 512/512 [MB] (average 161 MBps) 00:08:18.803 00:08:18.803 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:18.803 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:18.803 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:18.803 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:18.803 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:18.803 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:18.803 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:18.803 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:19.062 [2024-11-19 12:28:24.066722] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:19.062 [2024-11-19 12:28:24.067587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74280 ] 00:08:19.062 { 00:08:19.062 "subsystems": [ 00:08:19.062 { 00:08:19.062 "subsystem": "bdev", 00:08:19.062 "config": [ 00:08:19.062 { 00:08:19.062 "params": { 00:08:19.062 "block_size": 512, 00:08:19.062 "num_blocks": 1048576, 00:08:19.062 "name": "malloc0" 00:08:19.062 }, 00:08:19.062 "method": "bdev_malloc_create" 00:08:19.062 }, 00:08:19.062 { 00:08:19.062 "params": { 00:08:19.062 "filename": "/dev/zram1", 00:08:19.062 "name": "uring0" 00:08:19.062 }, 00:08:19.062 "method": "bdev_uring_create" 00:08:19.062 }, 00:08:19.062 { 00:08:19.062 "params": { 00:08:19.062 "name": "uring0" 00:08:19.062 }, 00:08:19.062 "method": "bdev_uring_delete" 00:08:19.062 }, 00:08:19.062 { 00:08:19.062 "method": "bdev_wait_for_examine" 00:08:19.062 } 00:08:19.062 ] 00:08:19.062 } 00:08:19.062 ] 00:08:19.062 } 00:08:19.062 [2024-11-19 12:28:24.205928] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.062 [2024-11-19 12:28:24.237352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.062 [2024-11-19 12:28:24.266016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.322  [2024-11-19T12:28:24.841Z] Copying: 0/0 [B] (average 0 Bps) 00:08:19.581 00:08:19.581 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:19.581 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:19.581 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:19.581 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:08:19.582 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:19.582 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:19.582 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.582 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:19.582 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.582 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.582 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.582 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.582 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.582 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:19.582 12:28:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:19.582 12:28:24 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:19.582 [2024-11-19 12:28:24.673858] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:19.582 [2024-11-19 12:28:24.673964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74302 ] 00:08:19.582 { 00:08:19.582 "subsystems": [ 00:08:19.582 { 00:08:19.582 "subsystem": "bdev", 00:08:19.582 "config": [ 00:08:19.582 { 00:08:19.582 "params": { 00:08:19.582 "block_size": 512, 00:08:19.582 "num_blocks": 1048576, 00:08:19.582 "name": "malloc0" 00:08:19.582 }, 00:08:19.582 "method": "bdev_malloc_create" 00:08:19.582 }, 00:08:19.582 { 00:08:19.582 "params": { 00:08:19.582 "filename": "/dev/zram1", 00:08:19.582 "name": "uring0" 00:08:19.582 }, 00:08:19.582 "method": "bdev_uring_create" 00:08:19.582 }, 00:08:19.582 { 00:08:19.582 "params": { 00:08:19.582 "name": "uring0" 00:08:19.582 }, 00:08:19.582 "method": "bdev_uring_delete" 00:08:19.582 }, 00:08:19.582 { 00:08:19.582 "method": "bdev_wait_for_examine" 00:08:19.582 } 00:08:19.582 ] 00:08:19.582 } 00:08:19.582 ] 00:08:19.582 } 00:08:19.582 [2024-11-19 12:28:24.819025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.841 [2024-11-19 12:28:24.852121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.841 [2024-11-19 12:28:24.879188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.841 [2024-11-19 12:28:24.991865] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:19.841 [2024-11-19 12:28:24.991928] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:19.841 [2024-11-19 12:28:24.991954] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:19.841 [2024-11-19 12:28:24.991963] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:20.101 [2024-11-19 12:28:25.148854] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:20.101 12:28:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:08:20.101 12:28:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:20.101 12:28:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:08:20.101 12:28:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:08:20.101 12:28:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:08:20.101 12:28:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:20.101 12:28:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:20.101 12:28:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:20.101 12:28:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:20.101 12:28:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:20.101 12:28:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:20.101 12:28:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:20.360 00:08:20.360 real 0m12.806s 00:08:20.361 user 0m8.799s 00:08:20.361 sys 0m10.727s 00:08:20.361 12:28:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.361 ************************************ 00:08:20.361 12:28:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:20.361 END TEST dd_uring_copy 00:08:20.361 ************************************ 00:08:20.361 00:08:20.361 real 0m13.080s 00:08:20.361 user 0m8.950s 00:08:20.361 sys 0m10.844s 00:08:20.361 12:28:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.361 12:28:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:20.361 ************************************ 00:08:20.361 END TEST spdk_dd_uring 00:08:20.361 ************************************ 00:08:20.361 12:28:25 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:20.361 12:28:25 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.361 12:28:25 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.361 12:28:25 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:20.361 ************************************ 00:08:20.361 START TEST spdk_dd_sparse 00:08:20.361 ************************************ 00:08:20.361 12:28:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:20.621 * Looking for test storage... 00:08:20.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lcov --version 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.621 12:28:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:20.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.621 --rc genhtml_branch_coverage=1 00:08:20.621 --rc genhtml_function_coverage=1 00:08:20.621 --rc genhtml_legend=1 00:08:20.621 --rc geninfo_all_blocks=1 00:08:20.622 --rc geninfo_unexecuted_blocks=1 00:08:20.622 00:08:20.622 ' 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:20.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.622 --rc genhtml_branch_coverage=1 00:08:20.622 --rc genhtml_function_coverage=1 00:08:20.622 --rc genhtml_legend=1 00:08:20.622 --rc geninfo_all_blocks=1 00:08:20.622 --rc geninfo_unexecuted_blocks=1 00:08:20.622 00:08:20.622 ' 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:20.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.622 --rc genhtml_branch_coverage=1 00:08:20.622 --rc genhtml_function_coverage=1 00:08:20.622 --rc genhtml_legend=1 00:08:20.622 --rc geninfo_all_blocks=1 00:08:20.622 --rc geninfo_unexecuted_blocks=1 00:08:20.622 00:08:20.622 ' 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:20.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.622 --rc genhtml_branch_coverage=1 00:08:20.622 --rc genhtml_function_coverage=1 00:08:20.622 --rc genhtml_legend=1 00:08:20.622 --rc geninfo_all_blocks=1 00:08:20.622 --rc geninfo_unexecuted_blocks=1 00:08:20.622 00:08:20.622 ' 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.622 12:28:25 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:20.622 1+0 records in 00:08:20.622 1+0 records out 00:08:20.622 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00702825 s, 597 MB/s 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:20.622 1+0 records in 00:08:20.622 1+0 records out 00:08:20.622 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00360695 s, 1.2 GB/s 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:20.622 1+0 records in 00:08:20.622 1+0 records out 00:08:20.622 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00621454 s, 675 MB/s 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:20.622 ************************************ 00:08:20.622 START TEST dd_sparse_file_to_file 00:08:20.622 ************************************ 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:20.622 12:28:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:20.882 [2024-11-19 12:28:25.892391] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:20.882 [2024-11-19 12:28:25.892500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74404 ] 00:08:20.882 { 00:08:20.882 "subsystems": [ 00:08:20.882 { 00:08:20.882 "subsystem": "bdev", 00:08:20.882 "config": [ 00:08:20.882 { 00:08:20.882 "params": { 00:08:20.882 "block_size": 4096, 00:08:20.882 "filename": "dd_sparse_aio_disk", 00:08:20.882 "name": "dd_aio" 00:08:20.882 }, 00:08:20.882 "method": "bdev_aio_create" 00:08:20.882 }, 00:08:20.882 { 00:08:20.882 "params": { 00:08:20.882 "lvs_name": "dd_lvstore", 00:08:20.882 "bdev_name": "dd_aio" 00:08:20.882 }, 00:08:20.882 "method": "bdev_lvol_create_lvstore" 00:08:20.882 }, 00:08:20.882 { 00:08:20.882 "method": "bdev_wait_for_examine" 00:08:20.882 } 00:08:20.882 ] 00:08:20.882 } 00:08:20.882 ] 00:08:20.882 } 00:08:20.882 [2024-11-19 12:28:26.031115] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.882 [2024-11-19 12:28:26.061800] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.882 [2024-11-19 12:28:26.090789] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.153  [2024-11-19T12:28:26.413Z] Copying: 12/36 [MB] (average 1000 MBps) 00:08:21.153 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:21.154 00:08:21.154 real 0m0.507s 00:08:21.154 user 0m0.304s 00:08:21.154 sys 0m0.242s 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.154 ************************************ 00:08:21.154 END TEST dd_sparse_file_to_file 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:21.154 ************************************ 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:21.154 ************************************ 00:08:21.154 START TEST dd_sparse_file_to_bdev 
00:08:21.154 ************************************ 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:21.154 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:21.427 [2024-11-19 12:28:26.453157] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:21.427 [2024-11-19 12:28:26.453260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74441 ] 00:08:21.427 { 00:08:21.427 "subsystems": [ 00:08:21.427 { 00:08:21.427 "subsystem": "bdev", 00:08:21.427 "config": [ 00:08:21.427 { 00:08:21.427 "params": { 00:08:21.427 "block_size": 4096, 00:08:21.427 "filename": "dd_sparse_aio_disk", 00:08:21.427 "name": "dd_aio" 00:08:21.427 }, 00:08:21.427 "method": "bdev_aio_create" 00:08:21.427 }, 00:08:21.427 { 00:08:21.427 "params": { 00:08:21.427 "lvs_name": "dd_lvstore", 00:08:21.427 "lvol_name": "dd_lvol", 00:08:21.427 "size_in_mib": 36, 00:08:21.427 "thin_provision": true 00:08:21.427 }, 00:08:21.427 "method": "bdev_lvol_create" 00:08:21.427 }, 00:08:21.427 { 00:08:21.427 "method": "bdev_wait_for_examine" 00:08:21.427 } 00:08:21.427 ] 00:08:21.427 } 00:08:21.427 ] 00:08:21.427 } 00:08:21.427 [2024-11-19 12:28:26.591299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.427 [2024-11-19 12:28:26.622097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.427 [2024-11-19 12:28:26.648462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.686  [2024-11-19T12:28:26.946Z] Copying: 12/36 [MB] (average 571 MBps) 00:08:21.686 00:08:21.686 00:08:21.686 real 0m0.469s 00:08:21.686 user 0m0.296s 00:08:21.686 sys 0m0.235s 00:08:21.686 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.686 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:21.686 ************************************ 00:08:21.686 END TEST dd_sparse_file_to_bdev 00:08:21.686 ************************************ 00:08:21.686 12:28:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:08:21.686 12:28:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.686 12:28:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.686 12:28:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:21.686 ************************************ 00:08:21.686 START TEST dd_sparse_bdev_to_file 00:08:21.686 ************************************ 00:08:21.686 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:08:21.686 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:21.686 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:21.686 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:21.686 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:21.686 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:21.686 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:21.686 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:21.686 12:28:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:21.946 [2024-11-19 12:28:26.971065] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:21.946 [2024-11-19 12:28:26.971585] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74479 ] 00:08:21.946 { 00:08:21.946 "subsystems": [ 00:08:21.946 { 00:08:21.946 "subsystem": "bdev", 00:08:21.946 "config": [ 00:08:21.946 { 00:08:21.946 "params": { 00:08:21.946 "block_size": 4096, 00:08:21.946 "filename": "dd_sparse_aio_disk", 00:08:21.946 "name": "dd_aio" 00:08:21.946 }, 00:08:21.946 "method": "bdev_aio_create" 00:08:21.946 }, 00:08:21.946 { 00:08:21.946 "method": "bdev_wait_for_examine" 00:08:21.946 } 00:08:21.946 ] 00:08:21.946 } 00:08:21.946 ] 00:08:21.946 } 00:08:21.946 [2024-11-19 12:28:27.110637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.946 [2024-11-19 12:28:27.142880] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.946 [2024-11-19 12:28:27.169441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.205  [2024-11-19T12:28:27.465Z] Copying: 12/36 [MB] (average 1090 MBps) 00:08:22.205 00:08:22.205 12:28:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:22.205 12:28:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:22.206 00:08:22.206 real 0m0.481s 00:08:22.206 user 0m0.295s 00:08:22.206 sys 0m0.228s 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:22.206 ************************************ 00:08:22.206 END TEST dd_sparse_bdev_to_file 00:08:22.206 ************************************ 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:22.206 00:08:22.206 real 0m1.858s 00:08:22.206 user 0m1.075s 00:08:22.206 sys 0m0.918s 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.206 12:28:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:22.206 ************************************ 00:08:22.206 END TEST spdk_dd_sparse 00:08:22.206 ************************************ 00:08:22.466 12:28:27 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:22.466 12:28:27 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.466 12:28:27 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.466 12:28:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:22.466 ************************************ 00:08:22.466 START TEST spdk_dd_negative 00:08:22.466 ************************************ 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:22.466 * Looking for test storage... 
00:08:22.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lcov --version 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:22.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.466 --rc genhtml_branch_coverage=1 00:08:22.466 --rc genhtml_function_coverage=1 00:08:22.466 --rc genhtml_legend=1 00:08:22.466 --rc geninfo_all_blocks=1 00:08:22.466 --rc geninfo_unexecuted_blocks=1 00:08:22.466 00:08:22.466 ' 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:22.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.466 --rc genhtml_branch_coverage=1 00:08:22.466 --rc genhtml_function_coverage=1 00:08:22.466 --rc genhtml_legend=1 00:08:22.466 --rc geninfo_all_blocks=1 00:08:22.466 --rc geninfo_unexecuted_blocks=1 00:08:22.466 00:08:22.466 ' 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:22.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.466 --rc genhtml_branch_coverage=1 00:08:22.466 --rc genhtml_function_coverage=1 00:08:22.466 --rc genhtml_legend=1 00:08:22.466 --rc geninfo_all_blocks=1 00:08:22.466 --rc geninfo_unexecuted_blocks=1 00:08:22.466 00:08:22.466 ' 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:22.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.466 --rc genhtml_branch_coverage=1 00:08:22.466 --rc genhtml_function_coverage=1 00:08:22.466 --rc genhtml_legend=1 00:08:22.466 --rc geninfo_all_blocks=1 00:08:22.466 --rc geninfo_unexecuted_blocks=1 00:08:22.466 00:08:22.466 ' 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:22.466 12:28:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:22.467 12:28:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:22.467 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.467 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.467 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:22.727 ************************************ 00:08:22.727 START TEST 
dd_invalid_arguments 00:08:22.727 ************************************ 00:08:22.727 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:08:22.727 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:22.727 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:08:22.727 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:22.727 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.727 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.727 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.727 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.727 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.727 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.727 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.727 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:22.727 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:22.727 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:22.727 00:08:22.727 CPU options: 00:08:22.727 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:22.727 (like [0,1,10]) 00:08:22.727 --lcores lcore to CPU mapping list. The list is in the format: 00:08:22.727 [<,lcores[@CPUs]>...] 00:08:22.727 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:22.727 Within the group, '-' is used for range separator, 00:08:22.727 ',' is used for single number separator. 00:08:22.727 '( )' can be omitted for single element group, 00:08:22.727 '@' can be omitted if cpus and lcores have the same value 00:08:22.727 --disable-cpumask-locks Disable CPU core lock files. 00:08:22.727 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:22.727 pollers in the app support interrupt mode) 00:08:22.727 -p, --main-core main (primary) core for DPDK 00:08:22.727 00:08:22.727 Configuration options: 00:08:22.727 -c, --config, --json JSON config file 00:08:22.727 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:22.727 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:22.727 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:22.727 --rpcs-allowed comma-separated list of permitted RPCS 00:08:22.727 --json-ignore-init-errors don't exit on invalid config entry 00:08:22.727 00:08:22.727 Memory options: 00:08:22.727 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:22.727 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:22.727 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:22.727 -R, --huge-unlink unlink huge files after initialization 00:08:22.727 -n, --mem-channels number of memory channels used for DPDK 00:08:22.727 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:22.727 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:22.727 --no-huge run without using hugepages 00:08:22.727 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:22.727 -i, --shm-id shared memory ID (optional) 00:08:22.727 -g, --single-file-segments force creating just one hugetlbfs file 00:08:22.727 00:08:22.727 PCI options: 00:08:22.727 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:22.727 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:22.727 -u, --no-pci disable PCI access 00:08:22.727 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:22.727 00:08:22.727 Log options: 00:08:22.727 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:22.727 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:22.727 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:22.727 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:22.727 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, fuse_dispatcher, 00:08:22.727 gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, 00:08:22.727 lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, nvme_vfio, opal, 00:08:22.727 reactor, rpc, rpc_client, scsi, sock, sock_posix, spdk_aio_mgr_io, 00:08:22.727 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:08:22.727 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, 00:08:22.727 vfu_virtio, vfu_virtio_blk, vfu_virtio_fs, vfu_virtio_fs_data, 00:08:22.727 vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 00:08:22.727 virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:22.727 --silence-noticelog disable notice level logging to stderr 00:08:22.727 00:08:22.727 Trace options: 00:08:22.727 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:22.727 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:22.727 [2024-11-19 12:28:27.789315] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:22.728 setting 0 to disable trace (default 32768) 00:08:22.728 Tracepoints vary in size and can use more than one trace entry. 00:08:22.728 -e, --tpoint-group [:] 00:08:22.728 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, 00:08:22.728 ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, 00:08:22.728 blob, bdev_raid, all). 00:08:22.728 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:22.728 a tracepoint group. First tpoint inside a group can be enabled by 00:08:22.728 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:22.728 combined (e.g. 
thread,bdev:0x1). All available tpoints can be found 00:08:22.728 in /include/spdk_internal/trace_defs.h 00:08:22.728 00:08:22.728 Other options: 00:08:22.728 -h, --help show this usage 00:08:22.728 -v, --version print SPDK version 00:08:22.728 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:22.728 --env-context Opaque context for use of the env implementation 00:08:22.728 00:08:22.728 Application specific: 00:08:22.728 [--------- DD Options ---------] 00:08:22.728 --if Input file. Must specify either --if or --ib. 00:08:22.728 --ib Input bdev. Must specifier either --if or --ib 00:08:22.728 --of Output file. Must specify either --of or --ob. 00:08:22.728 --ob Output bdev. Must specify either --of or --ob. 00:08:22.728 --iflag Input file flags. 00:08:22.728 --oflag Output file flags. 00:08:22.728 --bs I/O unit size (default: 4096) 00:08:22.728 --qd Queue depth (default: 2) 00:08:22.728 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:22.728 --skip Skip this many I/O units at start of input. (default: 0) 00:08:22.728 --seek Skip this many I/O units at start of output. (default: 0) 00:08:22.728 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:22.728 --sparse Enable hole skipping in input target 00:08:22.728 Available iflag and oflag values: 00:08:22.728 append - append mode 00:08:22.728 direct - use direct I/O for data 00:08:22.728 directory - fail unless a directory 00:08:22.728 dsync - use synchronized I/O for data 00:08:22.728 noatime - do not update access time 00:08:22.728 noctty - do not assign controlling terminal from file 00:08:22.728 nofollow - do not follow symlinks 00:08:22.728 nonblock - use non-blocking I/O 00:08:22.728 sync - use synchronized I/O for data and metadata 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.728 00:08:22.728 real 0m0.083s 00:08:22.728 user 0m0.054s 00:08:22.728 sys 0m0.025s 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:22.728 ************************************ 00:08:22.728 END TEST dd_invalid_arguments 00:08:22.728 ************************************ 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:22.728 ************************************ 00:08:22.728 START TEST dd_double_input 00:08:22.728 ************************************ 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:22.728 [2024-11-19 12:28:27.911721] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.728 00:08:22.728 real 0m0.077s 00:08:22.728 user 0m0.049s 00:08:22.728 sys 0m0.026s 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:22.728 ************************************ 00:08:22.728 END TEST dd_double_input 00:08:22.728 ************************************ 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.728 12:28:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:22.728 ************************************ 00:08:22.728 START TEST dd_double_output 00:08:22.728 ************************************ 00:08:22.989 12:28:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:08:22.989 12:28:27 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:22.989 12:28:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:08:22.989 12:28:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:22.989 12:28:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.989 12:28:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.989 12:28:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.989 12:28:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.989 12:28:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.989 12:28:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.989 12:28:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.989 12:28:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:22.989 12:28:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:22.989 [2024-11-19 12:28:28.041924] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.989 00:08:22.989 real 0m0.077s 00:08:22.989 user 0m0.049s 00:08:22.989 sys 0m0.027s 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:22.989 ************************************ 00:08:22.989 END TEST dd_double_output 00:08:22.989 ************************************ 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:22.989 ************************************ 00:08:22.989 START TEST dd_no_input 00:08:22.989 ************************************ 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:22.989 [2024-11-19 12:28:28.161811] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.989 00:08:22.989 real 0m0.061s 00:08:22.989 user 0m0.038s 00:08:22.989 sys 0m0.023s 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:22.989 ************************************ 00:08:22.989 END TEST dd_no_input 00:08:22.989 ************************************ 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:22.989 ************************************ 00:08:22.989 START TEST dd_no_output 00:08:22.989 ************************************ 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:22.989 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:23.248 [2024-11-19 12:28:28.285804] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:23.248 12:28:28 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.248 00:08:23.248 real 0m0.079s 00:08:23.248 user 0m0.057s 00:08:23.248 sys 0m0.021s 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:23.248 ************************************ 00:08:23.248 END TEST dd_no_output 00:08:23.248 ************************************ 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:23.248 ************************************ 00:08:23.248 START TEST dd_wrong_blocksize 00:08:23.248 ************************************ 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:23.248 [2024-11-19 12:28:28.409018] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.248 00:08:23.248 real 0m0.062s 00:08:23.248 user 0m0.037s 00:08:23.248 sys 0m0.024s 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.248 12:28:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:23.248 ************************************ 00:08:23.249 END TEST dd_wrong_blocksize 00:08:23.249 ************************************ 00:08:23.249 12:28:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:23.249 12:28:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.249 12:28:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.249 12:28:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:23.249 ************************************ 00:08:23.249 START TEST dd_smaller_blocksize 00:08:23.249 ************************************ 00:08:23.249 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:08:23.249 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:23.249 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:23.249 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:23.249 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.249 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.249 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.249 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.249 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.249 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.249 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.249 
12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:23.249 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:23.508 [2024-11-19 12:28:28.520749] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:23.508 [2024-11-19 12:28:28.520838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74700 ] 00:08:23.508 [2024-11-19 12:28:28.648543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.508 [2024-11-19 12:28:28.680421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.508 [2024-11-19 12:28:28.705926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.508 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:23.508 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:23.508 [2024-11-19 12:28:28.720204] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:23.508 [2024-11-19 12:28:28.720229] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:23.768 [2024-11-19 12:28:28.775356] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.768 00:08:23.768 real 0m0.377s 00:08:23.768 user 0m0.181s 00:08:23.768 sys 0m0.092s 00:08:23.768 ************************************ 00:08:23.768 END TEST dd_smaller_blocksize 00:08:23.768 ************************************ 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:23.768 ************************************ 00:08:23.768 START TEST dd_invalid_count 00:08:23.768 ************************************ 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
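The dd_invalid_count case that starts here follows the same shape as every negative case above it: run_test calls the case function, and the function wraps a deliberately broken spdk_dd invocation in the NOT helper, so the case passes only when spdk_dd exits non-zero and prints the expected *ERROR* line. A minimal sketch of that shape, with shortened paths and a simplified stand-in for NOT (the real helpers live in common/autotest_common.sh and dd/negative_dd.sh):

    # Sketch only; paths are shortened and NOT is simplified, the flags match this log.
    DD=./build/bin/spdk_dd

    NOT() {                     # succeed only when the wrapped command fails
      if "$@"; then return 1; else return 0; fi
    }

    invalid_count() {           # spdk_dd must reject a negative --count
      NOT "$DD" --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --count=-9
    }

    invalid_count && echo 'dd_invalid_count: PASS'

The es=22 bookkeeping in the surrounding trace appears to be the helper recording spdk_dd's actual exit status before declaring the case passed.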
00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:23.768 [2024-11-19 12:28:28.959579] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.768 00:08:23.768 real 0m0.081s 00:08:23.768 user 0m0.050s 00:08:23.768 sys 0m0.030s 00:08:23.768 ************************************ 00:08:23.768 END TEST dd_invalid_count 00:08:23.768 ************************************ 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.768 12:28:28 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:23.768 12:28:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:23.768 12:28:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.768 12:28:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.768 12:28:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:24.028 ************************************ 
00:08:24.028 START TEST dd_invalid_oflag 00:08:24.028 ************************************ 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:24.028 [2024-11-19 12:28:29.087219] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.028 00:08:24.028 real 0m0.073s 00:08:24.028 user 0m0.046s 00:08:24.028 sys 0m0.026s 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:24.028 ************************************ 00:08:24.028 END TEST dd_invalid_oflag 00:08:24.028 ************************************ 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:24.028 ************************************ 00:08:24.028 START TEST dd_invalid_iflag 00:08:24.028 
************************************ 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:24.028 [2024-11-19 12:28:29.217284] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.028 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.028 00:08:24.028 real 0m0.077s 00:08:24.029 user 0m0.051s 00:08:24.029 sys 0m0.024s 00:08:24.029 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.029 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:24.029 ************************************ 00:08:24.029 END TEST dd_invalid_iflag 00:08:24.029 ************************************ 00:08:24.029 12:28:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:24.029 12:28:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.029 12:28:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.029 12:28:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:24.287 ************************************ 00:08:24.287 START TEST dd_unknown_flag 00:08:24.287 ************************************ 00:08:24.287 
12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:08:24.287 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:24.287 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:08:24.287 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:24.287 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.287 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.287 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.287 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.287 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.287 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.287 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.287 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.287 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:24.287 [2024-11-19 12:28:29.349272] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:24.287 [2024-11-19 12:28:29.349362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74792 ] 00:08:24.287 [2024-11-19 12:28:29.489907] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.287 [2024-11-19 12:28:29.531711] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.547 [2024-11-19 12:28:29.563900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.547 [2024-11-19 12:28:29.581787] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:24.547 [2024-11-19 12:28:29.581849] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.547 [2024-11-19 12:28:29.581909] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:24.547 [2024-11-19 12:28:29.581927] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.547 [2024-11-19 12:28:29.582179] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:24.547 [2024-11-19 12:28:29.582199] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.547 [2024-11-19 12:28:29.582254] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:24.547 [2024-11-19 12:28:29.582267] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:24.547 [2024-11-19 12:28:29.646274] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.547 00:08:24.547 real 0m0.430s 00:08:24.547 user 0m0.226s 00:08:24.547 sys 0m0.110s 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.547 ************************************ 00:08:24.547 END TEST dd_unknown_flag 00:08:24.547 ************************************ 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:24.547 ************************************ 00:08:24.547 START TEST dd_invalid_json 00:08:24.547 ************************************ 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.547 12:28:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:24.807 [2024-11-19 12:28:29.834440] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:24.807 [2024-11-19 12:28:29.834532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74815 ] 00:08:24.807 [2024-11-19 12:28:29.973712] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.807 [2024-11-19 12:28:30.014544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.807 [2024-11-19 12:28:30.014617] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:24.807 [2024-11-19 12:28:30.014633] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:24.807 [2024-11-19 12:28:30.014644] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.807 [2024-11-19 12:28:30.014703] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:25.065 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:08:25.065 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.065 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:08:25.065 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:08:25.065 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:08:25.065 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.065 00:08:25.065 real 0m0.315s 00:08:25.065 user 0m0.152s 00:08:25.065 sys 0m0.061s 00:08:25.065 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.065 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:25.065 ************************************ 00:08:25.065 END TEST dd_invalid_json 00:08:25.065 ************************************ 00:08:25.065 12:28:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:25.065 12:28:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.065 12:28:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.065 12:28:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:25.065 ************************************ 00:08:25.065 START TEST dd_invalid_seek 00:08:25.065 ************************************ 00:08:25.065 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:08:25.065 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:25.065 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:25.066 
12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:25.066 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:25.066 [2024-11-19 12:28:30.202335] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
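Unlike the file-based cases above, dd_invalid_seek points spdk_dd at two in-memory malloc bdevs described by the JSON config dumped just below (gen_conf feeds it to the process through /dev/fd/62) and then asks for --seek=513 on a 512-block output bdev, which has to fail. A rough stand-alone equivalent of what this trace does, where the relative binary path and the process-substitution form are assumptions and the flags and config mirror the log:

    # Sketch only: hand spdk_dd a bdev config on --json and seek one block past the end.
    conf='{ "subsystems": [ { "subsystem": "bdev", "config": [
      { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc0" },
        "method": "bdev_malloc_create" },
      { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc1" },
        "method": "bdev_malloc_create" },
      { "method": "bdev_wait_for_examine" } ] } ] }'
    ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 --json <(printf '%s' "$conf")
    # expected failure: --seek value too big (513) - only 512 blocks available in output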
00:08:25.066 [2024-11-19 12:28:30.203006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74850 ] 00:08:25.066 { 00:08:25.066 "subsystems": [ 00:08:25.066 { 00:08:25.066 "subsystem": "bdev", 00:08:25.066 "config": [ 00:08:25.066 { 00:08:25.066 "params": { 00:08:25.066 "block_size": 512, 00:08:25.066 "num_blocks": 512, 00:08:25.066 "name": "malloc0" 00:08:25.066 }, 00:08:25.066 "method": "bdev_malloc_create" 00:08:25.066 }, 00:08:25.066 { 00:08:25.066 "params": { 00:08:25.066 "block_size": 512, 00:08:25.066 "num_blocks": 512, 00:08:25.066 "name": "malloc1" 00:08:25.066 }, 00:08:25.066 "method": "bdev_malloc_create" 00:08:25.066 }, 00:08:25.066 { 00:08:25.066 "method": "bdev_wait_for_examine" 00:08:25.066 } 00:08:25.066 ] 00:08:25.066 } 00:08:25.066 ] 00:08:25.066 } 00:08:25.325 [2024-11-19 12:28:30.345236] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.325 [2024-11-19 12:28:30.385858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.325 [2024-11-19 12:28:30.419589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.325 [2024-11-19 12:28:30.463830] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:25.325 [2024-11-19 12:28:30.463899] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.325 [2024-11-19 12:28:30.527808] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.585 00:08:25.585 real 0m0.458s 00:08:25.585 user 0m0.287s 00:08:25.585 sys 0m0.132s 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:25.585 ************************************ 00:08:25.585 END TEST dd_invalid_seek 00:08:25.585 ************************************ 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:25.585 ************************************ 00:08:25.585 START TEST dd_invalid_skip 00:08:25.585 ************************************ 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:25.585 12:28:30 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:25.585 [2024-11-19 12:28:30.703904] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:25.585 [2024-11-19 12:28:30.704014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74878 ] 00:08:25.585 { 00:08:25.585 "subsystems": [ 00:08:25.585 { 00:08:25.585 "subsystem": "bdev", 00:08:25.585 "config": [ 00:08:25.585 { 00:08:25.585 "params": { 00:08:25.585 "block_size": 512, 00:08:25.585 "num_blocks": 512, 00:08:25.585 "name": "malloc0" 00:08:25.585 }, 00:08:25.585 "method": "bdev_malloc_create" 00:08:25.585 }, 00:08:25.585 { 00:08:25.585 "params": { 00:08:25.585 "block_size": 512, 00:08:25.585 "num_blocks": 512, 00:08:25.585 "name": "malloc1" 00:08:25.585 }, 00:08:25.585 "method": "bdev_malloc_create" 00:08:25.585 }, 00:08:25.585 { 00:08:25.585 "method": "bdev_wait_for_examine" 00:08:25.585 } 00:08:25.585 ] 00:08:25.585 } 00:08:25.585 ] 00:08:25.585 } 00:08:25.585 [2024-11-19 12:28:30.838475] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.844 [2024-11-19 12:28:30.873955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.844 [2024-11-19 12:28:30.903930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.844 [2024-11-19 12:28:30.945429] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:25.844 [2024-11-19 12:28:30.945497] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.844 [2024-11-19 12:28:31.000651] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:25.844 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:08:25.844 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.844 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:08:25.844 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:08:25.844 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:08:25.844 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.844 00:08:25.844 real 0m0.415s 00:08:25.844 user 0m0.267s 00:08:25.844 sys 0m0.112s 00:08:25.844 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.844 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:25.844 ************************************ 00:08:25.844 END TEST dd_invalid_skip 00:08:25.844 ************************************ 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:26.104 ************************************ 00:08:26.104 START TEST dd_invalid_input_count 00:08:26.104 ************************************ 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:08:26.104 12:28:31 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.104 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:26.104 { 00:08:26.104 "subsystems": [ 00:08:26.104 { 00:08:26.104 "subsystem": "bdev", 00:08:26.104 "config": [ 00:08:26.104 { 00:08:26.104 "params": { 00:08:26.104 "block_size": 512, 00:08:26.104 "num_blocks": 512, 00:08:26.104 "name": "malloc0" 00:08:26.104 }, 
00:08:26.104 "method": "bdev_malloc_create" 00:08:26.104 }, 00:08:26.104 { 00:08:26.104 "params": { 00:08:26.104 "block_size": 512, 00:08:26.104 "num_blocks": 512, 00:08:26.104 "name": "malloc1" 00:08:26.104 }, 00:08:26.104 "method": "bdev_malloc_create" 00:08:26.104 }, 00:08:26.104 { 00:08:26.104 "method": "bdev_wait_for_examine" 00:08:26.104 } 00:08:26.104 ] 00:08:26.104 } 00:08:26.104 ] 00:08:26.104 } 00:08:26.105 [2024-11-19 12:28:31.177155] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:26.105 [2024-11-19 12:28:31.177251] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74917 ] 00:08:26.105 [2024-11-19 12:28:31.315333] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.105 [2024-11-19 12:28:31.349313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.364 [2024-11-19 12:28:31.377221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.364 [2024-11-19 12:28:31.417427] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:26.364 [2024-11-19 12:28:31.417501] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.364 [2024-11-19 12:28:31.477691] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.364 00:08:26.364 real 0m0.425s 00:08:26.364 user 0m0.275s 00:08:26.364 sys 0m0.108s 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:26.364 ************************************ 00:08:26.364 END TEST dd_invalid_input_count 00:08:26.364 ************************************ 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:26.364 ************************************ 00:08:26.364 START TEST dd_invalid_output_count 00:08:26.364 ************************************ 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # invalid_output_count 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 
mbdev0_b=512 mbdev0_bs=512 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.364 12:28:31 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:26.624 { 00:08:26.624 "subsystems": [ 00:08:26.624 { 00:08:26.624 "subsystem": "bdev", 00:08:26.624 "config": [ 00:08:26.624 { 00:08:26.624 "params": { 00:08:26.624 "block_size": 512, 00:08:26.624 "num_blocks": 512, 00:08:26.624 "name": "malloc0" 00:08:26.624 }, 00:08:26.624 "method": "bdev_malloc_create" 00:08:26.624 }, 00:08:26.624 { 00:08:26.624 "method": "bdev_wait_for_examine" 00:08:26.624 } 00:08:26.624 ] 00:08:26.624 } 00:08:26.624 ] 00:08:26.624 } 00:08:26.624 [2024-11-19 12:28:31.655025] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:26.624 [2024-11-19 12:28:31.655116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74945 ] 00:08:26.624 [2024-11-19 12:28:31.794846] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.624 [2024-11-19 12:28:31.825548] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.624 [2024-11-19 12:28:31.852032] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.884 [2024-11-19 12:28:31.886038] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:26.884 [2024-11-19 12:28:31.886130] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.884 [2024-11-19 12:28:31.946174] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.884 00:08:26.884 real 0m0.418s 00:08:26.884 user 0m0.265s 00:08:26.884 sys 0m0.108s 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:26.884 ************************************ 00:08:26.884 END TEST dd_invalid_output_count 00:08:26.884 ************************************ 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:26.884 ************************************ 00:08:26.884 START TEST dd_bs_not_multiple 00:08:26.884 ************************************ 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:26.884 12:28:32 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.884 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:26.884 { 00:08:26.884 "subsystems": [ 00:08:26.884 { 00:08:26.884 "subsystem": "bdev", 00:08:26.884 "config": [ 00:08:26.884 { 00:08:26.884 "params": { 00:08:26.884 "block_size": 512, 00:08:26.884 "num_blocks": 512, 00:08:26.884 "name": "malloc0" 00:08:26.884 }, 00:08:26.884 "method": "bdev_malloc_create" 00:08:26.884 }, 00:08:26.884 { 00:08:26.884 "params": { 00:08:26.884 "block_size": 512, 00:08:26.884 "num_blocks": 512, 00:08:26.884 "name": "malloc1" 00:08:26.884 }, 00:08:26.884 "method": "bdev_malloc_create" 00:08:26.884 }, 00:08:26.884 { 00:08:26.884 "method": "bdev_wait_for_examine" 00:08:26.884 } 00:08:26.884 ] 00:08:26.884 } 00:08:26.884 ] 00:08:26.884 } 00:08:26.884 [2024-11-19 12:28:32.123887] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
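This second negative test mirrors the first: two 512-block malloc bdevs and a copy between them with --bs=513, which is not a multiple of the 512-byte native block size, so spdk_dd must refuse the transfer; the NOT wrapper turns that expected failure into a pass. A simplified stand-in for the framework's NOT helper (an assumption for illustration; the real helper in autotest_common.sh also remaps the error code, which is what the es=234/es=106/es=1 lines below track):

NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> the negative test fails
    fi
    return 0        # command failed as expected -> the negative test passes
}
NOT build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json <(gen_conf)   # gen_conf as in the trace above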
00:08:26.885 [2024-11-19 12:28:32.123982] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74982 ] 00:08:27.144 [2024-11-19 12:28:32.262676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.144 [2024-11-19 12:28:32.293276] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.144 [2024-11-19 12:28:32.321477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.144 [2024-11-19 12:28:32.361762] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:27.144 [2024-11-19 12:28:32.361846] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:27.404 [2024-11-19 12:28:32.418268] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:27.404 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:08:27.404 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.404 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:08:27.404 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:08:27.404 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:08:27.404 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.404 00:08:27.404 real 0m0.426s 00:08:27.404 user 0m0.277s 00:08:27.404 sys 0m0.106s 00:08:27.404 ************************************ 00:08:27.404 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.404 12:28:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:27.404 END TEST dd_bs_not_multiple 00:08:27.404 ************************************ 00:08:27.404 00:08:27.404 real 0m5.020s 00:08:27.404 user 0m2.764s 00:08:27.404 sys 0m1.658s 00:08:27.404 12:28:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.404 ************************************ 00:08:27.404 END TEST spdk_dd_negative 00:08:27.404 ************************************ 00:08:27.404 12:28:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.404 00:08:27.404 real 1m3.456s 00:08:27.404 user 0m40.283s 00:08:27.404 sys 0m26.435s 00:08:27.404 12:28:32 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.404 ************************************ 00:08:27.404 END TEST spdk_dd 00:08:27.404 ************************************ 00:08:27.404 12:28:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:27.404 12:28:32 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:27.404 12:28:32 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:27.404 12:28:32 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:27.404 12:28:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:27.404 12:28:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.404 12:28:32 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:27.404 12:28:32 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:27.404 12:28:32 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:27.404 12:28:32 -- spdk/autotest.sh@273 -- 
# export NET_TYPE 00:08:27.404 12:28:32 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:27.404 12:28:32 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:27.404 12:28:32 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:27.404 12:28:32 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:27.404 12:28:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.404 12:28:32 -- common/autotest_common.sh@10 -- # set +x 00:08:27.404 ************************************ 00:08:27.404 START TEST nvmf_tcp 00:08:27.404 ************************************ 00:08:27.404 12:28:32 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:27.664 * Looking for test storage... 00:08:27.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:27.664 12:28:32 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:27.664 12:28:32 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:08:27.664 12:28:32 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:27.664 12:28:32 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.664 12:28:32 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:27.664 12:28:32 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.664 12:28:32 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:27.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.664 --rc genhtml_branch_coverage=1 00:08:27.664 --rc genhtml_function_coverage=1 00:08:27.664 --rc genhtml_legend=1 00:08:27.664 --rc geninfo_all_blocks=1 00:08:27.664 --rc geninfo_unexecuted_blocks=1 00:08:27.664 00:08:27.664 ' 00:08:27.664 12:28:32 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:27.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.664 --rc genhtml_branch_coverage=1 00:08:27.664 --rc genhtml_function_coverage=1 00:08:27.664 --rc genhtml_legend=1 00:08:27.664 --rc geninfo_all_blocks=1 00:08:27.664 --rc geninfo_unexecuted_blocks=1 00:08:27.664 00:08:27.664 ' 00:08:27.664 12:28:32 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:27.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.664 --rc genhtml_branch_coverage=1 00:08:27.664 --rc genhtml_function_coverage=1 00:08:27.664 --rc genhtml_legend=1 00:08:27.664 --rc geninfo_all_blocks=1 00:08:27.664 --rc geninfo_unexecuted_blocks=1 00:08:27.664 00:08:27.664 ' 00:08:27.664 12:28:32 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:27.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.664 --rc genhtml_branch_coverage=1 00:08:27.664 --rc genhtml_function_coverage=1 00:08:27.664 --rc genhtml_legend=1 00:08:27.664 --rc geninfo_all_blocks=1 00:08:27.664 --rc geninfo_unexecuted_blocks=1 00:08:27.664 00:08:27.664 ' 00:08:27.664 12:28:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:27.664 12:28:32 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:27.664 12:28:32 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:27.664 12:28:32 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:27.664 12:28:32 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.664 12:28:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.664 ************************************ 00:08:27.664 START TEST nvmf_target_core 00:08:27.664 ************************************ 00:08:27.664 12:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:27.664 * Looking for test storage... 00:08:27.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:27.664 12:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:27.664 12:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:08:27.664 12:28:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:27.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.925 --rc genhtml_branch_coverage=1 00:08:27.925 --rc genhtml_function_coverage=1 00:08:27.925 --rc genhtml_legend=1 00:08:27.925 --rc geninfo_all_blocks=1 00:08:27.925 --rc geninfo_unexecuted_blocks=1 00:08:27.925 00:08:27.925 ' 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:27.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.925 --rc genhtml_branch_coverage=1 00:08:27.925 --rc genhtml_function_coverage=1 00:08:27.925 --rc genhtml_legend=1 00:08:27.925 --rc geninfo_all_blocks=1 00:08:27.925 --rc geninfo_unexecuted_blocks=1 00:08:27.925 00:08:27.925 ' 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:27.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.925 --rc genhtml_branch_coverage=1 00:08:27.925 --rc genhtml_function_coverage=1 00:08:27.925 --rc genhtml_legend=1 00:08:27.925 --rc geninfo_all_blocks=1 00:08:27.925 --rc geninfo_unexecuted_blocks=1 00:08:27.925 00:08:27.925 ' 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:27.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.925 --rc genhtml_branch_coverage=1 00:08:27.925 --rc genhtml_function_coverage=1 00:08:27.925 --rc genhtml_legend=1 00:08:27.925 --rc geninfo_all_blocks=1 00:08:27.925 --rc geninfo_unexecuted_blocks=1 00:08:27.925 00:08:27.925 ' 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.925 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:27.925 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:27.926 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:27.926 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:27.926 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:27.926 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:27.926 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:27.926 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:27.926 12:28:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:27.926 12:28:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:27.926 12:28:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.926 12:28:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:27.926 ************************************ 00:08:27.926 START TEST nvmf_host_management 00:08:27.926 ************************************ 00:08:27.926 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:27.926 * Looking for test storage... 
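The "[: : integer expression expected" warning above comes from sourcing test/nvmf/common.sh: line 33 performs a numeric test of the form '[' '' -eq 1 ']' on a test flag that is unset in this run, so the empty string cannot be parsed as an integer; the check simply evaluates false and the script carries on. A defensive spelling of that kind of check, with a hypothetical flag name used purely for illustration:

# SOME_TEST_FLAG is a placeholder, not the variable actually tested at common.sh line 33.
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi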
00:08:27.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.926 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:27.926 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:08:27.926 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:28.186 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:28.186 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.186 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.186 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.186 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.186 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.186 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.186 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.186 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:28.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.187 --rc genhtml_branch_coverage=1 00:08:28.187 --rc genhtml_function_coverage=1 00:08:28.187 --rc genhtml_legend=1 00:08:28.187 --rc geninfo_all_blocks=1 00:08:28.187 --rc geninfo_unexecuted_blocks=1 00:08:28.187 00:08:28.187 ' 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:28.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.187 --rc genhtml_branch_coverage=1 00:08:28.187 --rc genhtml_function_coverage=1 00:08:28.187 --rc genhtml_legend=1 00:08:28.187 --rc geninfo_all_blocks=1 00:08:28.187 --rc geninfo_unexecuted_blocks=1 00:08:28.187 00:08:28.187 ' 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:28.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.187 --rc genhtml_branch_coverage=1 00:08:28.187 --rc genhtml_function_coverage=1 00:08:28.187 --rc genhtml_legend=1 00:08:28.187 --rc geninfo_all_blocks=1 00:08:28.187 --rc geninfo_unexecuted_blocks=1 00:08:28.187 00:08:28.187 ' 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:28.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.187 --rc genhtml_branch_coverage=1 00:08:28.187 --rc genhtml_function_coverage=1 00:08:28.187 --rc genhtml_legend=1 00:08:28.187 --rc geninfo_all_blocks=1 00:08:28.187 --rc geninfo_unexecuted_blocks=1 00:08:28.187 00:08:28.187 ' 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
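The lt/cmp_versions trace that keeps reappearing is the scripts/common.sh check deciding whether the installed lcov (1.15 here) is older than version 2, which in turn selects the --rc lcov_* option spelling exported right afterwards. A condensed sketch of the comparison the trace walks through, simplified relative to the real helper: split both versions on '.', '-' and ':' and compare field by field.

lt() { cmp_versions "$1" "<" "$2"; }
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == ">" || $op == ">=" ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == "<" || $op == "<=" ]]; return; }
    done
    [[ $op == "==" || $op == "<=" || $op == ">=" ]]
}
lt 1.15 2 && echo "lcov 1.15 is older than 2"   # first field: 1 < 2, so true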
00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.187 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:28.187 12:28:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.187 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:28.188 Cannot find device "nvmf_init_br" 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:28.188 Cannot find device "nvmf_init_br2" 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:28.188 Cannot find device "nvmf_tgt_br" 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:28.188 Cannot find device "nvmf_tgt_br2" 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:28.188 Cannot find device "nvmf_init_br" 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:28.188 Cannot find device "nvmf_init_br2" 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:28.188 Cannot find device "nvmf_tgt_br" 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:28.188 Cannot find device "nvmf_tgt_br2" 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:28.188 Cannot find device "nvmf_br" 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:28.188 Cannot find device "nvmf_init_if" 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:28.188 Cannot find device "nvmf_init_if2" 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:28.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:28.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:28.188 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:28.448 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:28.708 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:28.708 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:08:28.708 00:08:28.708 --- 10.0.0.3 ping statistics --- 00:08:28.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.708 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:28.708 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:28.708 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:08:28.708 00:08:28.708 --- 10.0.0.4 ping statistics --- 00:08:28.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.708 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:28.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:28.708 00:08:28.708 --- 10.0.0.1 ping statistics --- 00:08:28.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.708 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:28.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:28.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:08:28.708 00:08:28.708 --- 10.0.0.2 ping statistics --- 00:08:28.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.708 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=75315 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 75315 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 75315 ']' 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
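The four successful pings above confirm the veth-and-bridge topology that nvmf_veth_init assembled a few lines earlier: the host-side initiator addresses (10.0.0.1, 10.0.0.2) and the target addresses inside the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4) are all reachable through the nvmf_br bridge, with iptables rules admitting TCP port 4420. Reduced to one interface pair per side (the full script creates two of each, as the trace shows):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3    # host -> target namespace across the bridge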
00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.708 12:28:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.708 [2024-11-19 12:28:33.815068] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:28.708 [2024-11-19 12:28:33.815164] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.708 [2024-11-19 12:28:33.954545] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.968 [2024-11-19 12:28:33.991370] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.968 [2024-11-19 12:28:33.991430] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.968 [2024-11-19 12:28:33.991439] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.968 [2024-11-19 12:28:33.991446] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.968 [2024-11-19 12:28:33.991452] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.968 [2024-11-19 12:28:33.991589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.968 [2024-11-19 12:28:33.991716] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.968 [2024-11-19 12:28:33.991852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:28.968 [2024-11-19 12:28:33.991858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.968 [2024-11-19 12:28:34.020448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.907 [2024-11-19 12:28:34.837947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:29.907 12:28:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.907 Malloc0 00:08:29.907 [2024-11-19 12:28:34.892558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=75369 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 75369 /var/tmp/bdevperf.sock 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 75369 ']' 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:29.907 { 00:08:29.907 "params": { 00:08:29.907 "name": "Nvme$subsystem", 00:08:29.907 "trtype": "$TEST_TRANSPORT", 00:08:29.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:29.907 "adrfam": "ipv4", 00:08:29.907 "trsvcid": "$NVMF_PORT", 00:08:29.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:29.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:29.907 "hdgst": ${hdgst:-false}, 00:08:29.907 "ddgst": ${ddgst:-false} 00:08:29.907 }, 00:08:29.907 "method": "bdev_nvme_attach_controller" 00:08:29.907 } 00:08:29.907 EOF 00:08:29.907 )") 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:29.907 12:28:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:29.907 "params": { 00:08:29.907 "name": "Nvme0", 00:08:29.907 "trtype": "tcp", 00:08:29.907 "traddr": "10.0.0.3", 00:08:29.907 "adrfam": "ipv4", 00:08:29.907 "trsvcid": "4420", 00:08:29.907 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:29.907 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:29.907 "hdgst": false, 00:08:29.907 "ddgst": false 00:08:29.907 }, 00:08:29.907 "method": "bdev_nvme_attach_controller" 00:08:29.907 }' 00:08:29.907 [2024-11-19 12:28:34.995198] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:29.907 [2024-11-19 12:28:34.995271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75369 ] 00:08:29.907 [2024-11-19 12:28:35.161581] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.167 [2024-11-19 12:28:35.212191] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.167 [2024-11-19 12:28:35.260199] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.167 Running I/O for 10 seconds... 
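For reference, the bdevperf run above takes its controller definition from a generated JSON document on /dev/fd/63 rather than from live RPC calls. A minimal standalone sketch of the same pattern follows; the attach-controller parameters and the bdevperf flags are copied from this log, while the surrounding "subsystems"/"bdev" wrapper and the temporary config path are assumptions based on the usual SPDK JSON config layout.

#!/usr/bin/env bash
# Sketch: run bdevperf against an NVMe-oF TCP target using a JSON config file.
# Flags (-q 64 -o 65536 -w verify -t 10) and the attach parameters mirror the log above.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf   # path as used in this log
CONFIG=$(mktemp /tmp/bdevperf.XXXXXX.json)                      # assumed scratch location

cat > "$CONFIG" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 64 outstanding 64 KiB verify I/Os for 10 seconds, RPC socket on /var/tmp/bdevperf.sock
"$BDEVPERF" -r /var/tmp/bdevperf.sock --json "$CONFIG" -q 64 -o 65536 -w verify -t 10
rm -f "$CONFIG"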
00:08:30.167 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.167 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:30.167 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:30.167 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.167 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.167 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.167 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:30.427 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:30.427 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:30.427 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:30.427 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:30.427 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:30.427 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:30.427 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:30.427 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:30.427 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:30.427 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.427 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.427 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.427 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:30.427 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:30.427 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:30.688 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:30.688 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:30.688 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:30.688 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.688 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.688 12:28:35 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:30.688 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.688 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:30.688 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:30.688 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:30.688 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:30.688 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:30.688 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:30.688 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.688 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.688 [2024-11-19 12:28:35.795453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.688 [2024-11-19 12:28:35.795509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.688 [2024-11-19 12:28:35.795535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.688 [2024-11-19 12:28:35.795547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.688 [2024-11-19 12:28:35.795558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.688 [2024-11-19 12:28:35.795567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.688 [2024-11-19 12:28:35.795578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.795587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.795598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.795607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.795618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.795626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.795637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:30.689 [2024-11-19 12:28:35.795646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.795656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.795715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.795736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.795747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.795758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.795768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.795780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.795789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.795800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.795810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.795821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.795830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.795850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.795860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.795871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.795880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.795892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.795901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.795916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 
12:28:35.795926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.795938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.795948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.795959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.795969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.795980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.795990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796160] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.689 [2024-11-19 12:28:35.796463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.689 [2024-11-19 12:28:35.796472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:30.690 [2024-11-19 12:28:35.796965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:30.690 [2024-11-19 12:28:35.796978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210e460 is same with the state(6) to be set 00:08:30.690 [2024-11-19 12:28:35.797027] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x210e460 was disconnected and freed. reset controller. 
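The wall of ABORTED - SQ DELETION completions above is the host-management behaviour under test: removing the host NQN from the subsystem tears down the queue pair, so every in-flight read completes with an abort status, and the initiator resets the controller once the host is re-added just below. Driven by hand against a running target, the same sequence is roughly the following sketch (NQNs as printed in this log; the rpc.py path is the one the surrounding test scripts use and is otherwise an assumption):

# Sketch: force a controller reset by revoking and restoring host access to a subsystem.
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC_PY" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# outstanding host I/O now completes with ABORTED - SQ DELETION and the qpair is freed
"$RPC_PY" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# the host-side reset and reconnect should then succeed, as in the log below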
00:08:30.690 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.690 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:30.690 [2024-11-19 12:28:35.798292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:30.690 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.690 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.690 task offset: 89984 on job bdev=Nvme0n1 fails 00:08:30.690 00:08:30.690 Latency(us) 00:08:30.690 [2024-11-19T12:28:35.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.690 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:30.690 Job: Nvme0n1 ended in about 0.44 seconds with error 00:08:30.690 Verification LBA range: start 0x0 length 0x400 00:08:30.690 Nvme0n1 : 0.44 1452.17 90.76 145.22 0.00 38792.67 6106.76 35985.22 00:08:30.690 [2024-11-19T12:28:35.950Z] =================================================================================================================== 00:08:30.690 [2024-11-19T12:28:35.950Z] Total : 1452.17 90.76 145.22 0.00 38792.67 6106.76 35985.22 00:08:30.690 [2024-11-19 12:28:35.800565] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:30.690 [2024-11-19 12:28:35.800596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a47a0 (9): Bad file descriptor 00:08:30.690 [2024-11-19 12:28:35.804922] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
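Earlier in this run, waitforio gates the fault injection on actual traffic: it polls bdevperf's RPC socket with bdev_get_iostat until the bdev has completed at least 100 reads (67 on the first sample, 579 on the second). A sketch of that polling loop, using the same socket, bdev name, threshold, and 0.25 s interval as the log; the rpc.py location is the one these tests use elsewhere in this log.

#!/usr/bin/env bash
# Sketch: wait until a bdev has serviced at least 100 reads, polling bdevperf's RPC socket.
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
BDEV=Nvme0n1

for _ in $(seq 1 10); do
    reads=$("$RPC_PY" -s "$SOCK" bdev_get_iostat -b "$BDEV" | jq -r '.bdevs[0].num_read_ops')
    if [ "${reads:-0}" -ge 100 ]; then
        echo "bdev $BDEV reached $reads reads"
        exit 0
    fi
    sleep 0.25
done
echo "timed out waiting for read I/O on $BDEV" >&2
exit 1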
00:08:30.690 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.690 12:28:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:31.628 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 75369 00:08:31.628 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (75369) - No such process 00:08:31.628 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:31.628 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:31.628 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:31.628 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:31.628 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:31.628 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:31.628 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:31.628 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:31.628 { 00:08:31.628 "params": { 00:08:31.628 "name": "Nvme$subsystem", 00:08:31.628 "trtype": "$TEST_TRANSPORT", 00:08:31.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.628 "adrfam": "ipv4", 00:08:31.628 "trsvcid": "$NVMF_PORT", 00:08:31.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.628 "hdgst": ${hdgst:-false}, 00:08:31.628 "ddgst": ${ddgst:-false} 00:08:31.628 }, 00:08:31.628 "method": "bdev_nvme_attach_controller" 00:08:31.628 } 00:08:31.628 EOF 00:08:31.628 )") 00:08:31.628 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:31.628 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:31.628 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:31.628 12:28:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:31.628 "params": { 00:08:31.628 "name": "Nvme0", 00:08:31.628 "trtype": "tcp", 00:08:31.628 "traddr": "10.0.0.3", 00:08:31.628 "adrfam": "ipv4", 00:08:31.628 "trsvcid": "4420", 00:08:31.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:31.628 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:31.628 "hdgst": false, 00:08:31.628 "ddgst": false 00:08:31.628 }, 00:08:31.628 "method": "bdev_nvme_attach_controller" 00:08:31.628 }' 00:08:31.628 [2024-11-19 12:28:36.865716] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:31.628 [2024-11-19 12:28:36.865812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75409 ] 00:08:31.887 [2024-11-19 12:28:37.000157] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.887 [2024-11-19 12:28:37.042811] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.887 [2024-11-19 12:28:37.079790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.147 Running I/O for 1 seconds... 00:08:33.085 1600.00 IOPS, 100.00 MiB/s 00:08:33.085 Latency(us) 00:08:33.085 [2024-11-19T12:28:38.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.085 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:33.085 Verification LBA range: start 0x0 length 0x400 00:08:33.085 Nvme0n1 : 1.01 1655.23 103.45 0.00 0.00 37934.11 3798.11 34078.72 00:08:33.085 [2024-11-19T12:28:38.345Z] =================================================================================================================== 00:08:33.085 [2024-11-19T12:28:38.345Z] Total : 1655.23 103.45 0.00 0.00 37934.11 3798.11 34078.72 00:08:33.085 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:33.085 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:33.085 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:33.085 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:33.085 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:33.085 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:33.085 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:33.345 rmmod nvme_tcp 00:08:33.345 rmmod nvme_fabrics 00:08:33.345 rmmod nvme_keyring 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 75315 ']' 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 75315 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 75315 ']' 00:08:33.345 12:28:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 75315 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75315 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:33.345 killing process with pid 75315 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75315' 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 75315 00:08:33.345 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 75315 00:08:33.345 [2024-11-19 12:28:38.591697] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:33.605 12:28:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:33.605 00:08:33.605 real 0m5.795s 00:08:33.605 user 0m20.935s 00:08:33.605 sys 0m1.358s 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.605 ************************************ 00:08:33.605 END TEST nvmf_host_management 00:08:33.605 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.605 ************************************ 00:08:33.866 12:28:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:33.866 12:28:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:33.866 12:28:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.866 12:28:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.866 ************************************ 00:08:33.866 START TEST nvmf_lvol 00:08:33.866 ************************************ 00:08:33.866 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:33.866 * Looking for test storage... 
00:08:33.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:33.866 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:33.866 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:33.866 12:28:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:33.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.866 --rc genhtml_branch_coverage=1 00:08:33.866 --rc genhtml_function_coverage=1 00:08:33.866 --rc genhtml_legend=1 00:08:33.866 --rc geninfo_all_blocks=1 00:08:33.866 --rc geninfo_unexecuted_blocks=1 00:08:33.866 00:08:33.866 ' 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:33.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.866 --rc genhtml_branch_coverage=1 00:08:33.866 --rc genhtml_function_coverage=1 00:08:33.866 --rc genhtml_legend=1 00:08:33.866 --rc geninfo_all_blocks=1 00:08:33.866 --rc geninfo_unexecuted_blocks=1 00:08:33.866 00:08:33.866 ' 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:33.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.866 --rc genhtml_branch_coverage=1 00:08:33.866 --rc genhtml_function_coverage=1 00:08:33.866 --rc genhtml_legend=1 00:08:33.866 --rc geninfo_all_blocks=1 00:08:33.866 --rc geninfo_unexecuted_blocks=1 00:08:33.866 00:08:33.866 ' 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:33.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.866 --rc genhtml_branch_coverage=1 00:08:33.866 --rc genhtml_function_coverage=1 00:08:33.866 --rc genhtml_legend=1 00:08:33.866 --rc geninfo_all_blocks=1 00:08:33.866 --rc geninfo_unexecuted_blocks=1 00:08:33.866 00:08:33.866 ' 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.866 12:28:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.866 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.867 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:33.867 
12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
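The nvmf_veth_init sequence traced below builds a self-contained NVMe/TCP test network out of four veth pairs, one network namespace and one bridge. Condensed into plain iproute2 commands it amounts to roughly the following sketch (interface names and 10.0.0.x addresses are exactly the ones this log uses; the iptables rules and ping checks that complete the setup appear in the trace itself):

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per initiator interface and per target interface
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # the target ends move into the namespace; the initiator ends stay in the root namespace
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
  ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
  ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # a single bridge joins the peer ends so initiator and target can reach each other
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br

With that in place the target listens on 10.0.0.3/10.0.0.4 inside nvmf_tgt_ns_spdk while initiators connect from 10.0.0.1/10.0.0.2 in the root namespace.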
00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:33.867 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:34.125 Cannot find device "nvmf_init_br" 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:34.125 Cannot find device "nvmf_init_br2" 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:34.125 Cannot find device "nvmf_tgt_br" 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:34.125 Cannot find device "nvmf_tgt_br2" 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:34.125 Cannot find device "nvmf_init_br" 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:34.125 Cannot find device "nvmf_init_br2" 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:34.125 Cannot find device "nvmf_tgt_br" 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:34.125 Cannot find device "nvmf_tgt_br2" 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:34.125 Cannot find device "nvmf_br" 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:34.125 Cannot find device "nvmf_init_if" 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:34.125 Cannot find device "nvmf_init_if2" 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:34.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:34.125 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:34.125 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:34.383 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:34.383 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:08:34.383 00:08:34.383 --- 10.0.0.3 ping statistics --- 00:08:34.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.383 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:34.383 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:34.383 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:08:34.383 00:08:34.383 --- 10.0.0.4 ping statistics --- 00:08:34.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.383 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:34.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:34.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:08:34.383 00:08:34.383 --- 10.0.0.1 ping statistics --- 00:08:34.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.383 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:08:34.383 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:34.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:34.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:08:34.384 00:08:34.384 --- 10.0.0.2 ping statistics --- 00:08:34.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.384 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=75684 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 75684 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 75684 ']' 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.384 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.384 [2024-11-19 12:28:39.568029] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:34.384 [2024-11-19 12:28:39.568108] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.643 [2024-11-19 12:28:39.698092] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:34.643 [2024-11-19 12:28:39.733059] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.643 [2024-11-19 12:28:39.733119] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.643 [2024-11-19 12:28:39.733128] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.643 [2024-11-19 12:28:39.733135] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.643 [2024-11-19 12:28:39.733141] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.643 [2024-11-19 12:28:39.733273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.643 [2024-11-19 12:28:39.734303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.643 [2024-11-19 12:28:39.734372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.643 [2024-11-19 12:28:39.763456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.643 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:34.643 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:34.643 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:34.643 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:34.643 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.643 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.643 12:28:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:35.212 [2024-11-19 12:28:40.167737] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.212 12:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:35.472 12:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:35.472 12:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:35.732 12:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:35.732 12:28:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:35.991 12:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:36.251 12:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=85731bc9-e8fb-4deb-a579-424b038a037e 00:08:36.251 12:28:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 85731bc9-e8fb-4deb-a579-424b038a037e lvol 20 00:08:36.510 12:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2d6d2789-7b5f-465c-b073-ad8c846d0c03 00:08:36.510 12:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:36.770 12:28:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2d6d2789-7b5f-465c-b073-ad8c846d0c03 00:08:37.029 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:37.289 [2024-11-19 12:28:42.363308] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:37.289 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:37.548 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=75752 00:08:37.548 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:37.548 12:28:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:38.486 12:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 2d6d2789-7b5f-465c-b073-ad8c846d0c03 MY_SNAPSHOT 00:08:38.746 12:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=16556c49-d072-4c75-9c93-dc73ec2c3aa6 00:08:38.746 12:28:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 2d6d2789-7b5f-465c-b073-ad8c846d0c03 30 00:08:39.316 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 16556c49-d072-4c75-9c93-dc73ec2c3aa6 MY_CLONE 00:08:39.575 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=05304140-01ed-4b0f-a959-99d1e479a8bc 00:08:39.575 12:28:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 05304140-01ed-4b0f-a959-99d1e479a8bc 00:08:39.835 12:28:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 75752 00:08:47.954 Initializing NVMe Controllers 00:08:47.954 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:47.954 Controller IO queue size 128, less than required. 00:08:47.954 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:47.954 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:47.954 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:47.954 Initialization complete. Launching workers. 
00:08:47.954 ======================================================== 00:08:47.954 Latency(us) 00:08:47.954 Device Information : IOPS MiB/s Average min max 00:08:47.954 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10920.52 42.66 11731.86 1684.25 63518.22 00:08:47.954 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10844.02 42.36 11812.74 2571.89 55246.67 00:08:47.954 ======================================================== 00:08:47.954 Total : 21764.53 85.02 11772.16 1684.25 63518.22 00:08:47.954 00:08:47.954 12:28:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:48.213 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2d6d2789-7b5f-465c-b073-ad8c846d0c03 00:08:48.472 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 85731bc9-e8fb-4deb-a579-424b038a037e 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.731 rmmod nvme_tcp 00:08:48.731 rmmod nvme_fabrics 00:08:48.731 rmmod nvme_keyring 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 75684 ']' 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 75684 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 75684 ']' 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 75684 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.731 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75684 00:08:48.732 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:48.732 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:48.732 killing process with pid 75684 00:08:48.732 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 75684' 00:08:48.732 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 75684 00:08:48.732 12:28:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 75684 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:48.990 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:48.991 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:48.991 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:49.249 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:49.249 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:49.250 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:49.250 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.250 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.250 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.250 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:49.250 00:08:49.250 real 0m15.431s 00:08:49.250 user 1m4.039s 00:08:49.250 sys 0m4.133s 00:08:49.250 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:49.250 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:49.250 ************************************ 00:08:49.250 END TEST nvmf_lvol 00:08:49.250 ************************************ 00:08:49.250 12:28:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:49.250 12:28:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:49.250 12:28:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.250 12:28:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.250 ************************************ 00:08:49.250 START TEST nvmf_lvs_grow 00:08:49.250 ************************************ 00:08:49.250 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:49.250 * Looking for test storage... 00:08:49.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:49.250 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:49.250 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:08:49.250 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:49.509 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:49.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.510 --rc genhtml_branch_coverage=1 00:08:49.510 --rc genhtml_function_coverage=1 00:08:49.510 --rc genhtml_legend=1 00:08:49.510 --rc geninfo_all_blocks=1 00:08:49.510 --rc geninfo_unexecuted_blocks=1 00:08:49.510 00:08:49.510 ' 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:49.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.510 --rc genhtml_branch_coverage=1 00:08:49.510 --rc genhtml_function_coverage=1 00:08:49.510 --rc genhtml_legend=1 00:08:49.510 --rc geninfo_all_blocks=1 00:08:49.510 --rc geninfo_unexecuted_blocks=1 00:08:49.510 00:08:49.510 ' 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:49.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.510 --rc genhtml_branch_coverage=1 00:08:49.510 --rc genhtml_function_coverage=1 00:08:49.510 --rc genhtml_legend=1 00:08:49.510 --rc geninfo_all_blocks=1 00:08:49.510 --rc geninfo_unexecuted_blocks=1 00:08:49.510 00:08:49.510 ' 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:49.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.510 --rc genhtml_branch_coverage=1 00:08:49.510 --rc genhtml_function_coverage=1 00:08:49.510 --rc genhtml_legend=1 00:08:49.510 --rc geninfo_all_blocks=1 00:08:49.510 --rc geninfo_unexecuted_blocks=1 00:08:49.510 00:08:49.510 ' 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:49.510 12:28:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.510 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
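Before the nvmf_lvs_grow trace continues, it helps to condense what the nvmf_lvol test above actually drove. Stripped of the xtrace plumbing, and with $rpc pointing at scripts/rpc.py and $lvs/$lvol/$snap/$clone standing in for the UUIDs each call returns (the concrete UUIDs are visible in the trace), the sequence is roughly the following sketch; the snapshot, resize, clone and inflate are all issued while spdk_nvme_perf keeps writing to the exported namespace:

  rpc=scripts/rpc.py   # /home/vagrant/spdk_repo/spdk/scripts/rpc.py in this run
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # two 64 MiB malloc bdevs, striped into raid0, with an lvolstore and a 20 MiB lvol on top
  $rpc bdev_malloc_create 64 512                       # -> Malloc0
  $rpc bdev_malloc_create 64 512                       # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
  # export the lvol over NVMe/TCP on the in-namespace target address
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # start background I/O, then reshape the volume underneath it
  build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 & perf_pid=$!
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"
  wait "$perf_pid"
  # teardown
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"

The perf results further up (two queue pairs on lcores 3 and 4, about 21.7k IOPS in total) come from that background run finishing after the lvol operations completed.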
00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:49.510 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:49.511 Cannot find device "nvmf_init_br" 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:49.511 Cannot find device "nvmf_init_br2" 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:49.511 Cannot find device "nvmf_tgt_br" 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:49.511 Cannot find device "nvmf_tgt_br2" 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:49.511 Cannot find device "nvmf_init_br" 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:49.511 Cannot find device "nvmf_init_br2" 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:49.511 Cannot find device "nvmf_tgt_br" 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:49.511 Cannot find device "nvmf_tgt_br2" 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:49.511 Cannot find device "nvmf_br" 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:49.511 Cannot find device "nvmf_init_if" 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:49.511 Cannot find device "nvmf_init_if2" 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:49.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:49.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:49.511 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
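The ipts calls that come next (and the matching iptr seen in the earlier teardown) are small wrappers around iptables: every rule the test inserts is tagged with an SPDK_NVMF comment, and cleanup reloads the ruleset with exactly those tagged rules filtered out, so any pre-existing firewall rules are left alone. A sketch of that idea, with the wrapper bodies inferred from the expanded commands in this log:

  ipts() {
      # apply the rule as given, tagged so it can be identified later
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }
  iptr() {
      # drop only the rules this test added
      iptables-save | grep -v SPDK_NVMF | iptables-restore
  }

  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  iptr                                                            # teardown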
00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:49.786 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:49.786 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:08:49.786 00:08:49.786 --- 10.0.0.3 ping statistics --- 00:08:49.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.786 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:49.786 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:49.786 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:08:49.786 00:08:49.786 --- 10.0.0.4 ping statistics --- 00:08:49.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.786 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:49.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:49.786 00:08:49.786 --- 10.0.0.1 ping statistics --- 00:08:49.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.786 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:49.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:49.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:08:49.786 00:08:49.786 --- 10.0.0.2 ping statistics --- 00:08:49.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.786 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.786 12:28:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:08:49.786 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:49.786 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.786 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:49.786 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:49.786 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.786 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:49.786 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:49.786 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:49.786 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:49.786 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.786 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.088 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=76124 00:08:50.088 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 76124 00:08:50.088 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 76124 ']' 00:08:50.088 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:50.088 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.088 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.088 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.088 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.088 12:28:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.088 [2024-11-19 12:28:55.092299] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:50.088 [2024-11-19 12:28:55.092398] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.088 [2024-11-19 12:28:55.234734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.088 [2024-11-19 12:28:55.276118] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.088 [2024-11-19 12:28:55.276184] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.088 [2024-11-19 12:28:55.276197] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.088 [2024-11-19 12:28:55.276207] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.088 [2024-11-19 12:28:55.276216] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.088 [2024-11-19 12:28:55.276247] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.088 [2024-11-19 12:28:55.309895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.032 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.032 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:51.032 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:51.032 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:51.032 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.032 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.032 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:51.291 [2024-11-19 12:28:56.357670] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.291 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:51.291 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:51.291 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.291 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.291 ************************************ 00:08:51.291 START TEST lvs_grow_clean 00:08:51.291 ************************************ 00:08:51.291 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:51.291 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:51.291 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:51.291 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:51.291 12:28:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:51.291 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:51.291 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:51.291 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:51.291 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:51.291 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:51.550 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:51.550 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:51.808 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=33e227e4-b2d8-411a-9741-cfce0f3e690f 00:08:51.808 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33e227e4-b2d8-411a-9741-cfce0f3e690f 00:08:51.808 12:28:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:52.066 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:52.066 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:52.066 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 33e227e4-b2d8-411a-9741-cfce0f3e690f lvol 150 00:08:52.340 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=273513b7-35a8-4c06-a8ca-37803c98566d 00:08:52.340 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:52.340 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:52.598 [2024-11-19 12:28:57.600383] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:52.598 [2024-11-19 12:28:57.600464] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:52.598 true 00:08:52.598 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:52.598 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33e227e4-b2d8-411a-9741-cfce0f3e690f 00:08:52.857 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:52.857 12:28:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:53.115 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 273513b7-35a8-4c06-a8ca-37803c98566d 00:08:53.373 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:53.631 [2024-11-19 12:28:58.688988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:53.631 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:53.890 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=76212 00:08:53.890 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:53.890 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:53.890 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 76212 /var/tmp/bdevperf.sock 00:08:53.890 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 76212 ']' 00:08:53.890 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:53.890 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:53.890 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:53.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:53.890 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:53.890 12:28:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:53.890 [2024-11-19 12:28:59.053477] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:53.890 [2024-11-19 12:28:59.053583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76212 ] 00:08:54.147 [2024-11-19 12:28:59.191971] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.147 [2024-11-19 12:28:59.233164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.147 [2024-11-19 12:28:59.265891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:54.147 12:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.147 12:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:54.147 12:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:54.405 Nvme0n1 00:08:54.405 12:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:54.664 [ 00:08:54.664 { 00:08:54.664 "name": "Nvme0n1", 00:08:54.664 "aliases": [ 00:08:54.664 "273513b7-35a8-4c06-a8ca-37803c98566d" 00:08:54.664 ], 00:08:54.664 "product_name": "NVMe disk", 00:08:54.664 "block_size": 4096, 00:08:54.664 "num_blocks": 38912, 00:08:54.664 "uuid": "273513b7-35a8-4c06-a8ca-37803c98566d", 00:08:54.664 "numa_id": -1, 00:08:54.664 "assigned_rate_limits": { 00:08:54.664 "rw_ios_per_sec": 0, 00:08:54.664 "rw_mbytes_per_sec": 0, 00:08:54.664 "r_mbytes_per_sec": 0, 00:08:54.664 "w_mbytes_per_sec": 0 00:08:54.664 }, 00:08:54.664 "claimed": false, 00:08:54.664 "zoned": false, 00:08:54.664 "supported_io_types": { 00:08:54.664 "read": true, 00:08:54.664 "write": true, 00:08:54.664 "unmap": true, 00:08:54.664 "flush": true, 00:08:54.664 "reset": true, 00:08:54.664 "nvme_admin": true, 00:08:54.664 "nvme_io": true, 00:08:54.664 "nvme_io_md": false, 00:08:54.664 "write_zeroes": true, 00:08:54.664 "zcopy": false, 00:08:54.664 "get_zone_info": false, 00:08:54.664 "zone_management": false, 00:08:54.664 "zone_append": false, 00:08:54.664 "compare": true, 00:08:54.664 "compare_and_write": true, 00:08:54.664 "abort": true, 00:08:54.664 "seek_hole": false, 00:08:54.664 "seek_data": false, 00:08:54.664 "copy": true, 00:08:54.664 "nvme_iov_md": false 00:08:54.664 }, 00:08:54.664 "memory_domains": [ 00:08:54.664 { 00:08:54.664 "dma_device_id": "system", 00:08:54.664 "dma_device_type": 1 00:08:54.664 } 00:08:54.664 ], 00:08:54.664 "driver_specific": { 00:08:54.664 "nvme": [ 00:08:54.664 { 00:08:54.664 "trid": { 00:08:54.664 "trtype": "TCP", 00:08:54.664 "adrfam": "IPv4", 00:08:54.664 "traddr": "10.0.0.3", 00:08:54.664 "trsvcid": "4420", 00:08:54.664 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:54.664 }, 00:08:54.664 "ctrlr_data": { 00:08:54.664 "cntlid": 1, 00:08:54.664 "vendor_id": "0x8086", 00:08:54.664 "model_number": "SPDK bdev Controller", 00:08:54.664 "serial_number": "SPDK0", 00:08:54.664 "firmware_revision": "24.09.1", 00:08:54.664 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:54.664 "oacs": { 00:08:54.664 "security": 0, 00:08:54.664 "format": 0, 00:08:54.664 "firmware": 0, 
00:08:54.664 "ns_manage": 0 00:08:54.664 }, 00:08:54.664 "multi_ctrlr": true, 00:08:54.664 "ana_reporting": false 00:08:54.664 }, 00:08:54.664 "vs": { 00:08:54.664 "nvme_version": "1.3" 00:08:54.664 }, 00:08:54.664 "ns_data": { 00:08:54.664 "id": 1, 00:08:54.664 "can_share": true 00:08:54.664 } 00:08:54.664 } 00:08:54.664 ], 00:08:54.664 "mp_policy": "active_passive" 00:08:54.664 } 00:08:54.664 } 00:08:54.664 ] 00:08:54.664 12:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=76228 00:08:54.664 12:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:54.664 12:28:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:54.923 Running I/O for 10 seconds... 00:08:55.858 Latency(us) 00:08:55.858 [2024-11-19T12:29:01.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.858 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:55.858 [2024-11-19T12:29:01.118Z] =================================================================================================================== 00:08:55.858 [2024-11-19T12:29:01.119Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:55.859 00:08:56.794 12:29:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 33e227e4-b2d8-411a-9741-cfce0f3e690f 00:08:57.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.053 Nvme0n1 : 2.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:57.053 [2024-11-19T12:29:02.313Z] =================================================================================================================== 00:08:57.053 [2024-11-19T12:29:02.313Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:57.053 00:08:57.053 true 00:08:57.053 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33e227e4-b2d8-411a-9741-cfce0f3e690f 00:08:57.053 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:57.621 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:57.621 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:57.621 12:29:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 76228 00:08:57.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.879 Nvme0n1 : 3.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:57.879 [2024-11-19T12:29:03.139Z] =================================================================================================================== 00:08:57.879 [2024-11-19T12:29:03.139Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:57.879 00:08:58.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.816 Nvme0n1 : 4.00 6445.25 25.18 0.00 0.00 0.00 0.00 0.00 00:08:58.816 [2024-11-19T12:29:04.076Z] 
=================================================================================================================== 00:08:58.816 [2024-11-19T12:29:04.076Z] Total : 6445.25 25.18 0.00 0.00 0.00 0.00 0.00 00:08:58.816 00:09:00.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.192 Nvme0n1 : 5.00 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:09:00.192 [2024-11-19T12:29:05.452Z] =================================================================================================================== 00:09:00.192 [2024-11-19T12:29:05.452Z] Total : 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:09:00.192 00:09:01.124 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.124 Nvme0n1 : 6.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:01.124 [2024-11-19T12:29:06.384Z] =================================================================================================================== 00:09:01.124 [2024-11-19T12:29:06.384Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:01.124 00:09:02.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.058 Nvme0n1 : 7.00 6404.43 25.02 0.00 0.00 0.00 0.00 0.00 00:09:02.058 [2024-11-19T12:29:07.318Z] =================================================================================================================== 00:09:02.058 [2024-11-19T12:29:07.318Z] Total : 6404.43 25.02 0.00 0.00 0.00 0.00 0.00 00:09:02.058 00:09:02.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.994 Nvme0n1 : 8.00 6397.62 24.99 0.00 0.00 0.00 0.00 0.00 00:09:02.994 [2024-11-19T12:29:08.254Z] =================================================================================================================== 00:09:02.994 [2024-11-19T12:29:08.254Z] Total : 6397.62 24.99 0.00 0.00 0.00 0.00 0.00 00:09:02.994 00:09:03.930 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.930 Nvme0n1 : 9.00 6378.22 24.91 0.00 0.00 0.00 0.00 0.00 00:09:03.930 [2024-11-19T12:29:09.190Z] =================================================================================================================== 00:09:03.930 [2024-11-19T12:29:09.190Z] Total : 6378.22 24.91 0.00 0.00 0.00 0.00 0.00 00:09:03.930 00:09:04.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.864 Nvme0n1 : 10.00 6337.30 24.76 0.00 0.00 0.00 0.00 0.00 00:09:04.864 [2024-11-19T12:29:10.124Z] =================================================================================================================== 00:09:04.864 [2024-11-19T12:29:10.124Z] Total : 6337.30 24.76 0.00 0.00 0.00 0.00 0.00 00:09:04.864 00:09:04.865 00:09:04.865 Latency(us) 00:09:04.865 [2024-11-19T12:29:10.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.865 Nvme0n1 : 10.01 6341.93 24.77 0.00 0.00 20178.36 16920.20 63867.81 00:09:04.865 [2024-11-19T12:29:10.125Z] =================================================================================================================== 00:09:04.865 [2024-11-19T12:29:10.125Z] Total : 6341.93 24.77 0.00 0.00 20178.36 16920.20 63867.81 00:09:04.865 { 00:09:04.865 "results": [ 00:09:04.865 { 00:09:04.865 "job": "Nvme0n1", 00:09:04.865 "core_mask": "0x2", 00:09:04.865 "workload": "randwrite", 00:09:04.865 "status": "finished", 00:09:04.865 "queue_depth": 128, 00:09:04.865 "io_size": 4096, 00:09:04.865 "runtime": 
10.012882, 00:09:04.865 "iops": 6341.930325354878, 00:09:04.865 "mibps": 24.77316533341749, 00:09:04.865 "io_failed": 0, 00:09:04.865 "io_timeout": 0, 00:09:04.865 "avg_latency_us": 20178.362057576756, 00:09:04.865 "min_latency_us": 16920.203636363636, 00:09:04.865 "max_latency_us": 63867.810909090906 00:09:04.865 } 00:09:04.865 ], 00:09:04.865 "core_count": 1 00:09:04.865 } 00:09:04.865 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 76212 00:09:04.865 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 76212 ']' 00:09:04.865 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 76212 00:09:04.865 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:04.865 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.865 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76212 00:09:05.123 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:05.123 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:05.123 killing process with pid 76212 00:09:05.123 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76212' 00:09:05.123 Received shutdown signal, test time was about 10.000000 seconds 00:09:05.123 00:09:05.123 Latency(us) 00:09:05.123 [2024-11-19T12:29:10.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.123 [2024-11-19T12:29:10.383Z] =================================================================================================================== 00:09:05.123 [2024-11-19T12:29:10.383Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:05.123 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 76212 00:09:05.123 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 76212 00:09:05.123 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:05.381 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:05.948 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33e227e4-b2d8-411a-9741-cfce0f3e690f 00:09:05.948 12:29:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:05.948 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:05.948 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:05.948 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:06.207 [2024-11-19 12:29:11.336268] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:06.207 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33e227e4-b2d8-411a-9741-cfce0f3e690f 00:09:06.207 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:06.207 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33e227e4-b2d8-411a-9741-cfce0f3e690f 00:09:06.207 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.207 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:06.207 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.207 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:06.207 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.207 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:06.207 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.207 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:06.207 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33e227e4-b2d8-411a-9741-cfce0f3e690f 00:09:06.466 request: 00:09:06.466 { 00:09:06.466 "uuid": "33e227e4-b2d8-411a-9741-cfce0f3e690f", 00:09:06.466 "method": "bdev_lvol_get_lvstores", 00:09:06.466 "req_id": 1 00:09:06.466 } 00:09:06.466 Got JSON-RPC error response 00:09:06.466 response: 00:09:06.466 { 00:09:06.466 "code": -19, 00:09:06.466 "message": "No such device" 00:09:06.466 } 00:09:06.466 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:06.466 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:06.466 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:06.466 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:06.466 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:06.723 aio_bdev 00:09:06.982 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
273513b7-35a8-4c06-a8ca-37803c98566d 00:09:06.982 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=273513b7-35a8-4c06-a8ca-37803c98566d 00:09:06.982 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:06.982 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:06.982 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:06.982 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:06.982 12:29:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:06.982 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 273513b7-35a8-4c06-a8ca-37803c98566d -t 2000 00:09:07.240 [ 00:09:07.240 { 00:09:07.240 "name": "273513b7-35a8-4c06-a8ca-37803c98566d", 00:09:07.240 "aliases": [ 00:09:07.240 "lvs/lvol" 00:09:07.240 ], 00:09:07.240 "product_name": "Logical Volume", 00:09:07.240 "block_size": 4096, 00:09:07.240 "num_blocks": 38912, 00:09:07.240 "uuid": "273513b7-35a8-4c06-a8ca-37803c98566d", 00:09:07.240 "assigned_rate_limits": { 00:09:07.240 "rw_ios_per_sec": 0, 00:09:07.240 "rw_mbytes_per_sec": 0, 00:09:07.240 "r_mbytes_per_sec": 0, 00:09:07.240 "w_mbytes_per_sec": 0 00:09:07.240 }, 00:09:07.240 "claimed": false, 00:09:07.240 "zoned": false, 00:09:07.240 "supported_io_types": { 00:09:07.240 "read": true, 00:09:07.240 "write": true, 00:09:07.240 "unmap": true, 00:09:07.240 "flush": false, 00:09:07.240 "reset": true, 00:09:07.240 "nvme_admin": false, 00:09:07.241 "nvme_io": false, 00:09:07.241 "nvme_io_md": false, 00:09:07.241 "write_zeroes": true, 00:09:07.241 "zcopy": false, 00:09:07.241 "get_zone_info": false, 00:09:07.241 "zone_management": false, 00:09:07.241 "zone_append": false, 00:09:07.241 "compare": false, 00:09:07.241 "compare_and_write": false, 00:09:07.241 "abort": false, 00:09:07.241 "seek_hole": true, 00:09:07.241 "seek_data": true, 00:09:07.241 "copy": false, 00:09:07.241 "nvme_iov_md": false 00:09:07.241 }, 00:09:07.241 "driver_specific": { 00:09:07.241 "lvol": { 00:09:07.241 "lvol_store_uuid": "33e227e4-b2d8-411a-9741-cfce0f3e690f", 00:09:07.241 "base_bdev": "aio_bdev", 00:09:07.241 "thin_provision": false, 00:09:07.241 "num_allocated_clusters": 38, 00:09:07.241 "snapshot": false, 00:09:07.241 "clone": false, 00:09:07.241 "esnap_clone": false 00:09:07.241 } 00:09:07.241 } 00:09:07.241 } 00:09:07.241 ] 00:09:07.241 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:07.241 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33e227e4-b2d8-411a-9741-cfce0f3e690f 00:09:07.241 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:07.499 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:07.499 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33e227e4-b2d8-411a-9741-cfce0f3e690f 00:09:07.499 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:07.758 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:07.758 12:29:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 273513b7-35a8-4c06-a8ca-37803c98566d 00:09:08.325 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 33e227e4-b2d8-411a-9741-cfce0f3e690f 00:09:08.325 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:08.892 12:29:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:09.149 ************************************ 00:09:09.149 00:09:09.149 real 0m17.888s 00:09:09.149 user 0m16.856s 00:09:09.149 sys 0m2.475s 00:09:09.149 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.149 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:09.149 END TEST lvs_grow_clean 00:09:09.149 ************************************ 00:09:09.149 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:09.149 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:09.149 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.149 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.149 ************************************ 00:09:09.150 START TEST lvs_grow_dirty 00:09:09.150 ************************************ 00:09:09.150 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:09.150 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:09.150 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:09.150 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:09.150 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:09.150 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:09.150 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:09.150 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:09.150 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:09.150 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:09.407 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:09.407 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:10.022 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ce370186-58d5-4aa1-8098-60ef700bb081 00:09:10.022 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce370186-58d5-4aa1-8098-60ef700bb081 00:09:10.022 12:29:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:10.022 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:10.022 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:10.022 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ce370186-58d5-4aa1-8098-60ef700bb081 lvol 150 00:09:10.281 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4552159a-b227-4554-bff7-5c39e42d88f7 00:09:10.281 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:10.281 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:10.540 [2024-11-19 12:29:15.674416] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:10.540 [2024-11-19 12:29:15.674500] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:10.540 true 00:09:10.540 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce370186-58d5-4aa1-8098-60ef700bb081 00:09:10.540 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:10.799 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:10.799 12:29:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:11.058 12:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4552159a-b227-4554-bff7-5c39e42d88f7 00:09:11.316 12:29:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:11.575 [2024-11-19 12:29:16.783135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:11.575 12:29:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:11.834 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=76482 00:09:11.834 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:11.834 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:11.834 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 76482 /var/tmp/bdevperf.sock 00:09:11.834 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 76482 ']' 00:09:11.834 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:11.834 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:11.834 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:11.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:11.834 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:11.834 12:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:12.092 [2024-11-19 12:29:17.091333] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:12.093 [2024-11-19 12:29:17.091640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76482 ] 00:09:12.093 [2024-11-19 12:29:17.234014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.093 [2024-11-19 12:29:17.276876] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.093 [2024-11-19 12:29:17.310813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.029 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.029 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:13.029 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:13.288 Nvme0n1 00:09:13.288 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:13.547 [ 00:09:13.547 { 00:09:13.547 "name": "Nvme0n1", 00:09:13.547 "aliases": [ 00:09:13.547 "4552159a-b227-4554-bff7-5c39e42d88f7" 00:09:13.547 ], 00:09:13.547 "product_name": "NVMe disk", 00:09:13.547 "block_size": 4096, 00:09:13.547 "num_blocks": 38912, 00:09:13.547 "uuid": "4552159a-b227-4554-bff7-5c39e42d88f7", 00:09:13.547 "numa_id": -1, 00:09:13.547 "assigned_rate_limits": { 00:09:13.547 "rw_ios_per_sec": 0, 00:09:13.547 "rw_mbytes_per_sec": 0, 00:09:13.547 "r_mbytes_per_sec": 0, 00:09:13.547 "w_mbytes_per_sec": 0 00:09:13.547 }, 00:09:13.547 "claimed": false, 00:09:13.547 "zoned": false, 00:09:13.547 "supported_io_types": { 00:09:13.547 "read": true, 00:09:13.547 "write": true, 00:09:13.547 "unmap": true, 00:09:13.547 "flush": true, 00:09:13.547 "reset": true, 00:09:13.547 "nvme_admin": true, 00:09:13.547 "nvme_io": true, 00:09:13.547 "nvme_io_md": false, 00:09:13.547 "write_zeroes": true, 00:09:13.547 "zcopy": false, 00:09:13.547 "get_zone_info": false, 00:09:13.547 "zone_management": false, 00:09:13.547 "zone_append": false, 00:09:13.547 "compare": true, 00:09:13.547 "compare_and_write": true, 00:09:13.547 "abort": true, 00:09:13.547 "seek_hole": false, 00:09:13.547 "seek_data": false, 00:09:13.547 "copy": true, 00:09:13.547 "nvme_iov_md": false 00:09:13.547 }, 00:09:13.547 "memory_domains": [ 00:09:13.547 { 00:09:13.547 "dma_device_id": "system", 00:09:13.547 "dma_device_type": 1 00:09:13.547 } 00:09:13.547 ], 00:09:13.547 "driver_specific": { 00:09:13.547 "nvme": [ 00:09:13.547 { 00:09:13.547 "trid": { 00:09:13.547 "trtype": "TCP", 00:09:13.547 "adrfam": "IPv4", 00:09:13.547 "traddr": "10.0.0.3", 00:09:13.547 "trsvcid": "4420", 00:09:13.547 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:13.547 }, 00:09:13.547 "ctrlr_data": { 00:09:13.547 "cntlid": 1, 00:09:13.547 "vendor_id": "0x8086", 00:09:13.547 "model_number": "SPDK bdev Controller", 00:09:13.547 "serial_number": "SPDK0", 00:09:13.547 "firmware_revision": "24.09.1", 00:09:13.547 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:13.547 "oacs": { 00:09:13.547 "security": 0, 00:09:13.547 "format": 0, 00:09:13.547 "firmware": 0, 
00:09:13.547 "ns_manage": 0 00:09:13.547 }, 00:09:13.547 "multi_ctrlr": true, 00:09:13.547 "ana_reporting": false 00:09:13.547 }, 00:09:13.547 "vs": { 00:09:13.547 "nvme_version": "1.3" 00:09:13.547 }, 00:09:13.547 "ns_data": { 00:09:13.547 "id": 1, 00:09:13.547 "can_share": true 00:09:13.547 } 00:09:13.547 } 00:09:13.547 ], 00:09:13.547 "mp_policy": "active_passive" 00:09:13.547 } 00:09:13.547 } 00:09:13.547 ] 00:09:13.547 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:13.547 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=76505 00:09:13.547 12:29:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:13.547 Running I/O for 10 seconds... 00:09:14.496 Latency(us) 00:09:14.496 [2024-11-19T12:29:19.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.496 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:14.496 [2024-11-19T12:29:19.756Z] =================================================================================================================== 00:09:14.496 [2024-11-19T12:29:19.756Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:14.496 00:09:15.430 12:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ce370186-58d5-4aa1-8098-60ef700bb081 00:09:15.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.688 Nvme0n1 : 2.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:15.688 [2024-11-19T12:29:20.948Z] =================================================================================================================== 00:09:15.688 [2024-11-19T12:29:20.948Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:15.688 00:09:15.688 true 00:09:15.688 12:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce370186-58d5-4aa1-8098-60ef700bb081 00:09:15.688 12:29:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:15.946 12:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:15.946 12:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:15.946 12:29:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 76505 00:09:16.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.512 Nvme0n1 : 3.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:09:16.512 [2024-11-19T12:29:21.772Z] =================================================================================================================== 00:09:16.512 [2024-11-19T12:29:21.772Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:09:16.512 00:09:17.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.888 Nvme0n1 : 4.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:09:17.888 [2024-11-19T12:29:23.148Z] 
=================================================================================================================== 00:09:17.888 [2024-11-19T12:29:23.148Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:09:17.888 00:09:18.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.822 Nvme0n1 : 5.00 6324.60 24.71 0.00 0.00 0.00 0.00 0.00 00:09:18.822 [2024-11-19T12:29:24.082Z] =================================================================================================================== 00:09:18.822 [2024-11-19T12:29:24.082Z] Total : 6324.60 24.71 0.00 0.00 0.00 0.00 0.00 00:09:18.822 00:09:19.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.756 Nvme0n1 : 6.00 6328.83 24.72 0.00 0.00 0.00 0.00 0.00 00:09:19.756 [2024-11-19T12:29:25.016Z] =================================================================================================================== 00:09:19.756 [2024-11-19T12:29:25.016Z] Total : 6328.83 24.72 0.00 0.00 0.00 0.00 0.00 00:09:19.756 00:09:20.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.691 Nvme0n1 : 7.00 6216.14 24.28 0.00 0.00 0.00 0.00 0.00 00:09:20.691 [2024-11-19T12:29:25.951Z] =================================================================================================================== 00:09:20.691 [2024-11-19T12:29:25.951Z] Total : 6216.14 24.28 0.00 0.00 0.00 0.00 0.00 00:09:20.691 00:09:21.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.628 Nvme0n1 : 8.00 6185.25 24.16 0.00 0.00 0.00 0.00 0.00 00:09:21.628 [2024-11-19T12:29:26.888Z] =================================================================================================================== 00:09:21.628 [2024-11-19T12:29:26.888Z] Total : 6185.25 24.16 0.00 0.00 0.00 0.00 0.00 00:09:21.628 00:09:22.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.566 Nvme0n1 : 9.00 6161.22 24.07 0.00 0.00 0.00 0.00 0.00 00:09:22.566 [2024-11-19T12:29:27.826Z] =================================================================================================================== 00:09:22.566 [2024-11-19T12:29:27.826Z] Total : 6161.22 24.07 0.00 0.00 0.00 0.00 0.00 00:09:22.566 00:09:23.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.502 Nvme0n1 : 10.00 6154.70 24.04 0.00 0.00 0.00 0.00 0.00 00:09:23.502 [2024-11-19T12:29:28.762Z] =================================================================================================================== 00:09:23.502 [2024-11-19T12:29:28.762Z] Total : 6154.70 24.04 0.00 0.00 0.00 0.00 0.00 00:09:23.502 00:09:23.502 00:09:23.502 Latency(us) 00:09:23.502 [2024-11-19T12:29:28.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.502 Nvme0n1 : 10.03 6152.11 24.03 0.00 0.00 20799.35 12153.95 142987.64 00:09:23.502 [2024-11-19T12:29:28.762Z] =================================================================================================================== 00:09:23.502 [2024-11-19T12:29:28.762Z] Total : 6152.11 24.03 0.00 0.00 20799.35 12153.95 142987.64 00:09:23.502 { 00:09:23.502 "results": [ 00:09:23.502 { 00:09:23.502 "job": "Nvme0n1", 00:09:23.502 "core_mask": "0x2", 00:09:23.502 "workload": "randwrite", 00:09:23.502 "status": "finished", 00:09:23.502 "queue_depth": 128, 00:09:23.502 "io_size": 4096, 00:09:23.502 "runtime": 
10.025013, 00:09:23.502 "iops": 6152.111722947392, 00:09:23.502 "mibps": 24.03168641776325, 00:09:23.502 "io_failed": 0, 00:09:23.502 "io_timeout": 0, 00:09:23.502 "avg_latency_us": 20799.349439068432, 00:09:23.502 "min_latency_us": 12153.949090909091, 00:09:23.502 "max_latency_us": 142987.63636363635 00:09:23.502 } 00:09:23.502 ], 00:09:23.502 "core_count": 1 00:09:23.502 } 00:09:23.762 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 76482 00:09:23.762 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 76482 ']' 00:09:23.762 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 76482 00:09:23.762 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:23.762 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.762 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76482 00:09:23.762 killing process with pid 76482 00:09:23.762 Received shutdown signal, test time was about 10.000000 seconds 00:09:23.762 00:09:23.762 Latency(us) 00:09:23.762 [2024-11-19T12:29:29.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.762 [2024-11-19T12:29:29.022Z] =================================================================================================================== 00:09:23.762 [2024-11-19T12:29:29.022Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:23.762 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:23.762 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:23.762 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76482' 00:09:23.762 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 76482 00:09:23.762 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 76482 00:09:23.762 12:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:24.022 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:24.616 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:24.616 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce370186-58d5-4aa1-8098-60ef700bb081 00:09:24.907 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:24.907 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:24.907 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 76124 
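(Editor's note) This is where lvs_grow_dirty diverges from the clean variant: instead of deleting the lvol and lvstore, the script SIGKILLs the original nvmf_tgt (pid 76124) so the blobstore backing lvs is never cleanly unloaded, then starts a fresh target and re-registers the same AIO file, which forces blobstore recovery (the bs_recover notices further down). A hedged recap of that sequence, using only commands and paths that appear in this trace:

    # leave the lvstore dirty by killing the target hard
    kill -9 "$nvmfpid"                                   # pid 76124 in this run
    # start a fresh target inside the same namespace (pid 76638 below)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # re-creating the AIO bdev over the same file replays the dirty blobstore metadata
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    # the lvol created before the crash should reappear once recovery completes
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4552159a-b227-4554-bff7-5c39e42d88f7 -t 2000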
00:09:24.907 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 76124 00:09:24.907 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 76124 Killed "${NVMF_APP[@]}" "$@" 00:09:24.907 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:24.907 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:24.907 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:24.907 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:24.907 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:24.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.907 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=76638 00:09:24.907 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 76638 00:09:24.907 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:24.907 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 76638 ']' 00:09:24.907 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.907 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.908 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.908 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.908 12:29:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:24.908 [2024-11-19 12:29:29.972618] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:24.908 [2024-11-19 12:29:29.973531] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.908 [2024-11-19 12:29:30.113675] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.908 [2024-11-19 12:29:30.153952] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.908 [2024-11-19 12:29:30.154001] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.908 [2024-11-19 12:29:30.154013] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.908 [2024-11-19 12:29:30.154021] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.908 [2024-11-19 12:29:30.154044] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
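(The two notices above describe how to inspect the tracepoints that were just enabled; a minimal sketch, assuming the spdk_trace tool is built under build/bin of this repo and using the shm instance id 0 from this run:)

  trace_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace
  "$trace_bin" -s nvmf -i 0 > nvmf_trace.txt     # snapshot of events at runtime, as the notice suggests
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0.bak    # or keep the raw shm file for offline analysis/debug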
00:09:24.908 [2024-11-19 12:29:30.154085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.167 [2024-11-19 12:29:30.188636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:25.167 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.167 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:25.167 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:25.167 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:25.167 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:25.167 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.167 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:25.426 [2024-11-19 12:29:30.581836] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:25.426 [2024-11-19 12:29:30.582356] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:25.426 [2024-11-19 12:29:30.582761] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:25.426 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:25.426 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4552159a-b227-4554-bff7-5c39e42d88f7 00:09:25.426 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=4552159a-b227-4554-bff7-5c39e42d88f7 00:09:25.426 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:25.426 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:25.426 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:25.426 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:25.426 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:25.994 12:29:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4552159a-b227-4554-bff7-5c39e42d88f7 -t 2000 00:09:26.253 [ 00:09:26.253 { 00:09:26.253 "name": "4552159a-b227-4554-bff7-5c39e42d88f7", 00:09:26.253 "aliases": [ 00:09:26.253 "lvs/lvol" 00:09:26.253 ], 00:09:26.253 "product_name": "Logical Volume", 00:09:26.253 "block_size": 4096, 00:09:26.253 "num_blocks": 38912, 00:09:26.253 "uuid": "4552159a-b227-4554-bff7-5c39e42d88f7", 00:09:26.253 "assigned_rate_limits": { 00:09:26.253 "rw_ios_per_sec": 0, 00:09:26.253 "rw_mbytes_per_sec": 0, 00:09:26.253 "r_mbytes_per_sec": 0, 00:09:26.253 "w_mbytes_per_sec": 0 00:09:26.253 }, 00:09:26.253 
"claimed": false, 00:09:26.253 "zoned": false, 00:09:26.253 "supported_io_types": { 00:09:26.253 "read": true, 00:09:26.253 "write": true, 00:09:26.253 "unmap": true, 00:09:26.253 "flush": false, 00:09:26.253 "reset": true, 00:09:26.253 "nvme_admin": false, 00:09:26.253 "nvme_io": false, 00:09:26.253 "nvme_io_md": false, 00:09:26.253 "write_zeroes": true, 00:09:26.253 "zcopy": false, 00:09:26.253 "get_zone_info": false, 00:09:26.253 "zone_management": false, 00:09:26.253 "zone_append": false, 00:09:26.253 "compare": false, 00:09:26.253 "compare_and_write": false, 00:09:26.253 "abort": false, 00:09:26.253 "seek_hole": true, 00:09:26.253 "seek_data": true, 00:09:26.253 "copy": false, 00:09:26.253 "nvme_iov_md": false 00:09:26.253 }, 00:09:26.253 "driver_specific": { 00:09:26.253 "lvol": { 00:09:26.253 "lvol_store_uuid": "ce370186-58d5-4aa1-8098-60ef700bb081", 00:09:26.253 "base_bdev": "aio_bdev", 00:09:26.253 "thin_provision": false, 00:09:26.253 "num_allocated_clusters": 38, 00:09:26.253 "snapshot": false, 00:09:26.253 "clone": false, 00:09:26.253 "esnap_clone": false 00:09:26.253 } 00:09:26.253 } 00:09:26.253 } 00:09:26.253 ] 00:09:26.253 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:26.253 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce370186-58d5-4aa1-8098-60ef700bb081 00:09:26.253 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:26.512 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:26.512 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce370186-58d5-4aa1-8098-60ef700bb081 00:09:26.512 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:26.772 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:26.772 12:29:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:27.033 [2024-11-19 12:29:32.075693] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:27.033 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce370186-58d5-4aa1-8098-60ef700bb081 00:09:27.033 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:27.033 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce370186-58d5-4aa1-8098-60ef700bb081 00:09:27.033 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.033 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:27.033 12:29:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.033 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:27.033 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.033 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:27.033 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.033 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:27.033 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce370186-58d5-4aa1-8098-60ef700bb081 00:09:27.294 request: 00:09:27.294 { 00:09:27.294 "uuid": "ce370186-58d5-4aa1-8098-60ef700bb081", 00:09:27.294 "method": "bdev_lvol_get_lvstores", 00:09:27.294 "req_id": 1 00:09:27.294 } 00:09:27.294 Got JSON-RPC error response 00:09:27.294 response: 00:09:27.294 { 00:09:27.294 "code": -19, 00:09:27.294 "message": "No such device" 00:09:27.294 } 00:09:27.294 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:27.294 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:27.294 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:27.294 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:27.294 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:27.554 aio_bdev 00:09:27.554 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4552159a-b227-4554-bff7-5c39e42d88f7 00:09:27.554 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=4552159a-b227-4554-bff7-5c39e42d88f7 00:09:27.554 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:27.554 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:27.554 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.554 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.554 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:27.814 12:29:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4552159a-b227-4554-bff7-5c39e42d88f7 -t 2000 00:09:28.074 [ 00:09:28.074 { 
00:09:28.074 "name": "4552159a-b227-4554-bff7-5c39e42d88f7", 00:09:28.074 "aliases": [ 00:09:28.074 "lvs/lvol" 00:09:28.074 ], 00:09:28.074 "product_name": "Logical Volume", 00:09:28.074 "block_size": 4096, 00:09:28.074 "num_blocks": 38912, 00:09:28.074 "uuid": "4552159a-b227-4554-bff7-5c39e42d88f7", 00:09:28.074 "assigned_rate_limits": { 00:09:28.074 "rw_ios_per_sec": 0, 00:09:28.074 "rw_mbytes_per_sec": 0, 00:09:28.074 "r_mbytes_per_sec": 0, 00:09:28.074 "w_mbytes_per_sec": 0 00:09:28.074 }, 00:09:28.074 "claimed": false, 00:09:28.074 "zoned": false, 00:09:28.074 "supported_io_types": { 00:09:28.074 "read": true, 00:09:28.074 "write": true, 00:09:28.074 "unmap": true, 00:09:28.074 "flush": false, 00:09:28.074 "reset": true, 00:09:28.074 "nvme_admin": false, 00:09:28.074 "nvme_io": false, 00:09:28.074 "nvme_io_md": false, 00:09:28.074 "write_zeroes": true, 00:09:28.074 "zcopy": false, 00:09:28.074 "get_zone_info": false, 00:09:28.074 "zone_management": false, 00:09:28.074 "zone_append": false, 00:09:28.074 "compare": false, 00:09:28.074 "compare_and_write": false, 00:09:28.074 "abort": false, 00:09:28.074 "seek_hole": true, 00:09:28.074 "seek_data": true, 00:09:28.074 "copy": false, 00:09:28.074 "nvme_iov_md": false 00:09:28.074 }, 00:09:28.074 "driver_specific": { 00:09:28.074 "lvol": { 00:09:28.074 "lvol_store_uuid": "ce370186-58d5-4aa1-8098-60ef700bb081", 00:09:28.074 "base_bdev": "aio_bdev", 00:09:28.074 "thin_provision": false, 00:09:28.074 "num_allocated_clusters": 38, 00:09:28.074 "snapshot": false, 00:09:28.074 "clone": false, 00:09:28.074 "esnap_clone": false 00:09:28.074 } 00:09:28.074 } 00:09:28.074 } 00:09:28.074 ] 00:09:28.074 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:28.074 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce370186-58d5-4aa1-8098-60ef700bb081 00:09:28.074 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:28.335 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:28.335 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ce370186-58d5-4aa1-8098-60ef700bb081 00:09:28.335 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:28.594 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:28.594 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4552159a-b227-4554-bff7-5c39e42d88f7 00:09:28.594 12:29:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ce370186-58d5-4aa1-8098-60ef700bb081 00:09:29.163 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:29.163 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:29.733 ************************************ 00:09:29.733 END TEST lvs_grow_dirty 00:09:29.733 ************************************ 00:09:29.733 00:09:29.733 real 0m20.434s 00:09:29.733 user 0m42.198s 00:09:29.733 sys 0m9.057s 00:09:29.733 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.733 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:29.733 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:29.733 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:29.733 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:29.733 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:29.733 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:29.733 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:29.733 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:29.733 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:29.733 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:29.733 nvmf_trace.0 00:09:29.733 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:29.733 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:29.733 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:29.733 12:29:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:30.335 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.335 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:30.335 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.335 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.335 rmmod nvme_tcp 00:09:30.336 rmmod nvme_fabrics 00:09:30.336 rmmod nvme_keyring 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 76638 ']' 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 76638 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 76638 ']' 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 76638 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:30.336 12:29:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76638 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:30.336 killing process with pid 76638 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76638' 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 76638 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 76638 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:30.336 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:30.595 00:09:30.595 real 0m41.424s 00:09:30.595 user 1m5.426s 00:09:30.595 sys 0m12.585s 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.595 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:30.595 ************************************ 00:09:30.595 END TEST nvmf_lvs_grow 00:09:30.595 ************************************ 00:09:30.854 12:29:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:30.854 12:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:30.854 12:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.854 12:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.854 ************************************ 00:09:30.854 START TEST nvmf_bdev_io_wait 00:09:30.854 ************************************ 00:09:30.854 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:30.854 * Looking for test storage... 
00:09:30.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:30.854 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:30.854 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:30.854 12:29:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:30.854 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:30.854 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.854 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.854 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.854 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.854 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.854 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.854 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.854 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.854 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.854 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:30.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.855 --rc genhtml_branch_coverage=1 00:09:30.855 --rc genhtml_function_coverage=1 00:09:30.855 --rc genhtml_legend=1 00:09:30.855 --rc geninfo_all_blocks=1 00:09:30.855 --rc geninfo_unexecuted_blocks=1 00:09:30.855 00:09:30.855 ' 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:30.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.855 --rc genhtml_branch_coverage=1 00:09:30.855 --rc genhtml_function_coverage=1 00:09:30.855 --rc genhtml_legend=1 00:09:30.855 --rc geninfo_all_blocks=1 00:09:30.855 --rc geninfo_unexecuted_blocks=1 00:09:30.855 00:09:30.855 ' 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:30.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.855 --rc genhtml_branch_coverage=1 00:09:30.855 --rc genhtml_function_coverage=1 00:09:30.855 --rc genhtml_legend=1 00:09:30.855 --rc geninfo_all_blocks=1 00:09:30.855 --rc geninfo_unexecuted_blocks=1 00:09:30.855 00:09:30.855 ' 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:30.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.855 --rc genhtml_branch_coverage=1 00:09:30.855 --rc genhtml_function_coverage=1 00:09:30.855 --rc genhtml_legend=1 00:09:30.855 --rc geninfo_all_blocks=1 00:09:30.855 --rc geninfo_unexecuted_blocks=1 00:09:30.855 00:09:30.855 ' 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.855 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.856 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
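(MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 set above feed the transport, bdev and subsystem setup that appears further down in this log; roughly, and assuming the target's default RPC socket, the sequence is:)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192               # TCP transport, as in the @20 step below
  "$rpc" bdev_malloc_create 64 512 -b Malloc0                  # 64 MB malloc bdev with 512-byte blocks
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420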
00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:30.856 
12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:30.856 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:31.115 Cannot find device "nvmf_init_br" 00:09:31.115 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:31.116 Cannot find device "nvmf_init_br2" 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:31.116 Cannot find device "nvmf_tgt_br" 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.116 Cannot find device "nvmf_tgt_br2" 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:31.116 Cannot find device "nvmf_init_br" 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:31.116 Cannot find device "nvmf_init_br2" 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:31.116 Cannot find device "nvmf_tgt_br" 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:31.116 Cannot find device "nvmf_tgt_br2" 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:31.116 Cannot find device "nvmf_br" 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:31.116 Cannot find device "nvmf_init_if" 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:31.116 Cannot find device "nvmf_init_if2" 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.116 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:31.116 
12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.116 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:31.116 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:31.375 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:31.375 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:09:31.375 00:09:31.375 --- 10.0.0.3 ping statistics --- 00:09:31.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.375 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:31.375 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:31.375 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:09:31.375 00:09:31.375 --- 10.0.0.4 ping statistics --- 00:09:31.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.375 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:31.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:31.375 00:09:31.375 --- 10.0.0.1 ping statistics --- 00:09:31.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.375 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:31.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:31.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:09:31.375 00:09:31.375 --- 10.0.0.2 ping statistics --- 00:09:31.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.375 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=77013 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 77013 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 77013 ']' 00:09:31.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.375 12:29:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:31.635 [2024-11-19 12:29:36.647617] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
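(Because the target above was launched with --wait-for-rpc, it sits idle until initialization is driven over the RPC socket; a minimal sketch of the steps this run performs next, assuming the default /var/tmp/spdk.sock:)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # wait for the RPC listener, like waitforlisten above
  "$rpc" bdev_set_options -p 5 -c 1                                  # pre-init bdev options; must be set before framework_start_init
  "$rpc" framework_start_init                                        # finish subsystem initialization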
00:09:31.635 [2024-11-19 12:29:36.647919] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.635 [2024-11-19 12:29:36.791371] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.635 [2024-11-19 12:29:36.834612] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.635 [2024-11-19 12:29:36.834939] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.635 [2024-11-19 12:29:36.835094] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.635 [2024-11-19 12:29:36.835109] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.635 [2024-11-19 12:29:36.835118] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.635 [2024-11-19 12:29:36.835260] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.635 [2024-11-19 12:29:36.835409] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.635 [2024-11-19 12:29:36.835998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.635 [2024-11-19 12:29:36.836010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.570 [2024-11-19 12:29:37.682429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.570 [2024-11-19 12:29:37.697160] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.570 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.571 Malloc0 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:32.571 [2024-11-19 12:29:37.753675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=77048 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=77050 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:32.571 12:29:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:32.571 { 00:09:32.571 "params": { 00:09:32.571 "name": "Nvme$subsystem", 00:09:32.571 "trtype": "$TEST_TRANSPORT", 00:09:32.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.571 "adrfam": "ipv4", 00:09:32.571 "trsvcid": "$NVMF_PORT", 00:09:32.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.571 "hdgst": ${hdgst:-false}, 00:09:32.571 "ddgst": ${ddgst:-false} 00:09:32.571 }, 00:09:32.571 "method": "bdev_nvme_attach_controller" 00:09:32.571 } 00:09:32.571 EOF 00:09:32.571 )") 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=77052 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:32.571 { 00:09:32.571 "params": { 00:09:32.571 "name": "Nvme$subsystem", 00:09:32.571 "trtype": "$TEST_TRANSPORT", 00:09:32.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.571 "adrfam": "ipv4", 00:09:32.571 "trsvcid": "$NVMF_PORT", 00:09:32.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.571 "hdgst": ${hdgst:-false}, 00:09:32.571 "ddgst": ${ddgst:-false} 00:09:32.571 }, 00:09:32.571 "method": "bdev_nvme_attach_controller" 00:09:32.571 } 00:09:32.571 EOF 00:09:32.571 )") 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=77055 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 
00:09:32.571 { 00:09:32.571 "params": { 00:09:32.571 "name": "Nvme$subsystem", 00:09:32.571 "trtype": "$TEST_TRANSPORT", 00:09:32.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.571 "adrfam": "ipv4", 00:09:32.571 "trsvcid": "$NVMF_PORT", 00:09:32.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.571 "hdgst": ${hdgst:-false}, 00:09:32.571 "ddgst": ${ddgst:-false} 00:09:32.571 }, 00:09:32.571 "method": "bdev_nvme_attach_controller" 00:09:32.571 } 00:09:32.571 EOF 00:09:32.571 )") 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:32.571 "params": { 00:09:32.571 "name": "Nvme1", 00:09:32.571 "trtype": "tcp", 00:09:32.571 "traddr": "10.0.0.3", 00:09:32.571 "adrfam": "ipv4", 00:09:32.571 "trsvcid": "4420", 00:09:32.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.571 "hdgst": false, 00:09:32.571 "ddgst": false 00:09:32.571 }, 00:09:32.571 "method": "bdev_nvme_attach_controller" 00:09:32.571 }' 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:32.571 { 00:09:32.571 "params": { 00:09:32.571 "name": "Nvme$subsystem", 00:09:32.571 "trtype": "$TEST_TRANSPORT", 00:09:32.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.571 "adrfam": "ipv4", 00:09:32.571 "trsvcid": "$NVMF_PORT", 00:09:32.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.571 "hdgst": ${hdgst:-false}, 00:09:32.571 "ddgst": ${ddgst:-false} 00:09:32.571 }, 00:09:32.571 "method": "bdev_nvme_attach_controller" 00:09:32.571 } 00:09:32.571 EOF 00:09:32.571 )") 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:32.571 "params": { 00:09:32.571 "name": "Nvme1", 00:09:32.571 "trtype": "tcp", 00:09:32.571 "traddr": "10.0.0.3", 00:09:32.571 "adrfam": "ipv4", 00:09:32.571 "trsvcid": "4420", 00:09:32.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.571 "hdgst": false, 00:09:32.571 "ddgst": false 00:09:32.571 }, 00:09:32.571 "method": "bdev_nvme_attach_controller" 00:09:32.571 }' 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:32.571 "params": { 00:09:32.571 "name": "Nvme1", 00:09:32.571 "trtype": "tcp", 00:09:32.571 "traddr": "10.0.0.3", 00:09:32.571 "adrfam": "ipv4", 00:09:32.571 "trsvcid": "4420", 00:09:32.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.571 "hdgst": false, 00:09:32.571 "ddgst": false 00:09:32.571 }, 00:09:32.571 "method": "bdev_nvme_attach_controller" 00:09:32.571 }' 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:32.571 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:32.571 "params": { 00:09:32.571 "name": "Nvme1", 00:09:32.572 "trtype": "tcp", 00:09:32.572 "traddr": "10.0.0.3", 00:09:32.572 "adrfam": "ipv4", 00:09:32.572 "trsvcid": "4420", 00:09:32.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.572 "hdgst": false, 00:09:32.572 "ddgst": false 00:09:32.572 }, 00:09:32.572 "method": "bdev_nvme_attach_controller" 00:09:32.572 }' 00:09:32.572 [2024-11-19 12:29:37.816154] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:32.572 [2024-11-19 12:29:37.816851] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:32.572 12:29:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 77048 00:09:32.831 [2024-11-19 12:29:37.829578] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:32.831 [2024-11-19 12:29:37.829834] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:32.831 [2024-11-19 12:29:37.841372] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:32.831 [2024-11-19 12:29:37.841453] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:32.831 [2024-11-19 12:29:37.857780] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:32.831 [2024-11-19 12:29:37.858110] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:32.831 [2024-11-19 12:29:37.997452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.831 [2024-11-19 12:29:38.024523] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:32.831 [2024-11-19 12:29:38.031482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.831 [2024-11-19 12:29:38.053977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:32.831 [2024-11-19 12:29:38.056158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.831 [2024-11-19 12:29:38.079508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.090 [2024-11-19 12:29:38.103501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.090 [2024-11-19 12:29:38.107461] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:33.090 [2024-11-19 12:29:38.126476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.090 [2024-11-19 12:29:38.147390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.090 [2024-11-19 12:29:38.154138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:09:33.090 Running I/O for 1 seconds... 00:09:33.090 [2024-11-19 12:29:38.193059] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.090 Running I/O for 1 seconds... 00:09:33.090 Running I/O for 1 seconds... 00:09:33.090 Running I/O for 1 seconds... 
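The four bdevperf PIDs recorded above (77048, 77050, 77052, 77055) come from four independent initiator processes, one workload each, pinned to disjoint core masks and all fed the same connection parameters through process substitution (the /dev/fd/63 seen in the traces). A condensed sketch of that launch pattern, using the gen_nvmf_target_json helper from nvmf/common.sh; the single combined wait at the end stands in for the separate waits the script performs:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    # one process per workload, each on its own core, 128 deep, 4 KiB I/O, 1 second
    "$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    "$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    "$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    "$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!

    # reap all four runs before tearing the subsystem down
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"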
00:09:34.027 165912.00 IOPS, 648.09 MiB/s
00:09:34.027 Latency(us)
00:09:34.027 [2024-11-19T12:29:39.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:34.027 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:09:34.027 Nvme1n1 : 1.00 165542.89 646.65 0.00 0.00 769.05 398.43 2204.39
00:09:34.027 [2024-11-19T12:29:39.287Z] ===================================================================================================================
00:09:34.027 [2024-11-19T12:29:39.287Z] Total : 165542.89 646.65 0.00 0.00 769.05 398.43 2204.39
00:09:34.027 9449.00 IOPS, 36.91 MiB/s
00:09:34.027 Latency(us)
00:09:34.027 [2024-11-19T12:29:39.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:34.027 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:09:34.027 Nvme1n1 : 1.01 9487.09 37.06 0.00 0.00 13427.41 8221.79 20375.74
00:09:34.027 [2024-11-19T12:29:39.287Z] ===================================================================================================================
00:09:34.027 [2024-11-19T12:29:39.287Z] Total : 9487.09 37.06 0.00 0.00 13427.41 8221.79 20375.74
00:09:34.027 8130.00 IOPS, 31.76 MiB/s
00:09:34.027 Latency(us)
00:09:34.027 [2024-11-19T12:29:39.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:34.027 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:09:34.027 Nvme1n1 : 1.01 8193.46 32.01 0.00 0.00 15546.16 5034.36 24188.74
00:09:34.027 [2024-11-19T12:29:39.287Z] ===================================================================================================================
00:09:34.027 [2024-11-19T12:29:39.287Z] Total : 8193.46 32.01 0.00 0.00 15546.16 5034.36 24188.74
00:09:34.286 8340.00 IOPS, 32.58 MiB/s
00:09:34.286 Latency(us)
00:09:34.286 [2024-11-19T12:29:39.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:34.286 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:09:34.286 Nvme1n1 : 1.01 8422.20 32.90 0.00 0.00 15133.14 6732.33 24665.37
00:09:34.286 [2024-11-19T12:29:39.546Z] ===================================================================================================================
00:09:34.286 [2024-11-19T12:29:39.546Z] Total : 8422.20 32.90 0.00 0.00 15133.14 6732.33 24665.37
00:09:34.286 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 77050
00:09:34.286 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 77052
00:09:34.286 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 77055
00:09:34.286 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:34.286 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.286 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:09:34.286 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.286 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:09:34.286 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:09:34.286 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- #
nvmfcleanup 00:09:34.286 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:34.286 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.286 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:34.286 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.286 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.286 rmmod nvme_tcp 00:09:34.286 rmmod nvme_fabrics 00:09:34.286 rmmod nvme_keyring 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 77013 ']' 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 77013 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 77013 ']' 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 77013 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77013 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:34.545 killing process with pid 77013 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77013' 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 77013 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 77013 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:34.545 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:34.804 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:34.804 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:34.804 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:34.804 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:34.804 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:34.804 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:34.804 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:34.804 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.804 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.804 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.804 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:34.804 00:09:34.804 real 0m4.115s 00:09:34.804 user 0m15.789s 00:09:34.804 sys 0m2.225s 00:09:34.804 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.804 12:29:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:34.804 ************************************ 00:09:34.804 END TEST nvmf_bdev_io_wait 00:09:34.804 ************************************ 00:09:34.804 12:29:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:34.804 12:29:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:34.804 12:29:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.804 12:29:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.804 ************************************ 00:09:34.804 START TEST nvmf_queue_depth 00:09:34.804 ************************************ 00:09:34.804 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:35.065 * Looking for test storage... 
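One detail worth pulling out of the nvmftestfini teardown traced just above, before the queue_depth storage probe continues below: every firewall rule the harness adds is tagged with an SPDK_NVMF comment, so cleanup never has to track individual rules. A rough sketch of that teardown, with the final namespace removal an assumption about what _remove_spdk_ns does:

    # drop only the SPDK_NVMF-tagged rules, leaving the rest of the firewall untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # unwind the veth/bridge topology built for the test
    ip link set nvmf_init_br nomaster
    ip link set nvmf_tgt_br nomaster
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk    # assumed equivalent of _remove_spdk_ns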
00:09:35.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:35.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.065 --rc genhtml_branch_coverage=1 00:09:35.065 --rc genhtml_function_coverage=1 00:09:35.065 --rc genhtml_legend=1 00:09:35.065 --rc geninfo_all_blocks=1 00:09:35.065 --rc geninfo_unexecuted_blocks=1 00:09:35.065 00:09:35.065 ' 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:35.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.065 --rc genhtml_branch_coverage=1 00:09:35.065 --rc genhtml_function_coverage=1 00:09:35.065 --rc genhtml_legend=1 00:09:35.065 --rc geninfo_all_blocks=1 00:09:35.065 --rc geninfo_unexecuted_blocks=1 00:09:35.065 00:09:35.065 ' 00:09:35.065 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:35.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.065 --rc genhtml_branch_coverage=1 00:09:35.066 --rc genhtml_function_coverage=1 00:09:35.066 --rc genhtml_legend=1 00:09:35.066 --rc geninfo_all_blocks=1 00:09:35.066 --rc geninfo_unexecuted_blocks=1 00:09:35.066 00:09:35.066 ' 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:35.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.066 --rc genhtml_branch_coverage=1 00:09:35.066 --rc genhtml_function_coverage=1 00:09:35.066 --rc genhtml_legend=1 00:09:35.066 --rc geninfo_all_blocks=1 00:09:35.066 --rc geninfo_unexecuted_blocks=1 00:09:35.066 00:09:35.066 ' 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.066 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:35.066 
12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:35.066 12:29:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:35.066 Cannot find device "nvmf_init_br" 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:35.066 Cannot find device "nvmf_init_br2" 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:35.066 Cannot find device "nvmf_tgt_br" 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.066 Cannot find device "nvmf_tgt_br2" 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:35.066 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:35.066 Cannot find device "nvmf_init_br" 00:09:35.067 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:35.067 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:35.067 Cannot find device "nvmf_init_br2" 00:09:35.067 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:35.067 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:35.067 Cannot find device "nvmf_tgt_br" 00:09:35.067 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:35.067 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:35.326 Cannot find device "nvmf_tgt_br2" 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:35.326 Cannot find device "nvmf_br" 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:35.326 Cannot find device "nvmf_init_if" 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:35.326 Cannot find device "nvmf_init_if2" 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:35.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.326 12:29:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:35.326 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:35.326 
12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:35.326 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:35.586 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:35.586 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:09:35.586 00:09:35.586 --- 10.0.0.3 ping statistics --- 00:09:35.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.586 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:35.586 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:35.586 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:09:35.586 00:09:35.586 --- 10.0.0.4 ping statistics --- 00:09:35.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.586 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:35.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:35.586 00:09:35.586 --- 10.0.0.1 ping statistics --- 00:09:35.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.586 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:35.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
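The sequence of ip and iptables commands traced above builds the whole test topology from scratch: a network namespace for the target, veth pairs whose host-side peers hang off one bridge, and INPUT rules opening TCP port 4420, after which the connectivity pings (continuing below) confirm all four addresses. A condensed sketch of that setup showing one initiator/target pair; the second pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4) follows the same pattern:

    # namespace plus veth pairs; the target-side end moves into the namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addressing: initiator on 10.0.0.1, target listener address on 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # one bridge ties the host-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # allow NVMe/TCP (port 4420) in, tagging the rule so teardown can find it later
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'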
00:09:35.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:09:35.586 00:09:35.586 --- 10.0.0.2 ping statistics --- 00:09:35.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.586 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=77309 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 77309 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 77309 ']' 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.586 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.587 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.587 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.587 [2024-11-19 12:29:40.705880] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:35.587 [2024-11-19 12:29:40.705972] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.846 [2024-11-19 12:29:40.851967] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.846 [2024-11-19 12:29:40.883871] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.846 [2024-11-19 12:29:40.883932] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.846 [2024-11-19 12:29:40.883941] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.846 [2024-11-19 12:29:40.883948] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.846 [2024-11-19 12:29:40.883954] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.846 [2024-11-19 12:29:40.883978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.846 [2024-11-19 12:29:40.910444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.846 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.846 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:35.846 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:35.846 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:35.846 12:29:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.846 [2024-11-19 12:29:41.013250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.846 Malloc0 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.846 [2024-11-19 12:29:41.065635] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=77333 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 77333 /var/tmp/bdevperf.sock 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 77333 ']' 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.846 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.106 [2024-11-19 12:29:41.121005] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:36.106 [2024-11-19 12:29:41.121099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77333 ] 00:09:36.106 [2024-11-19 12:29:41.256035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.106 [2024-11-19 12:29:41.301147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.106 [2024-11-19 12:29:41.335550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.365 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.365 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:36.365 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:36.365 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.365 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:36.365 NVMe0n1 00:09:36.365 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.365 12:29:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:36.365 Running I/O for 10 seconds... 00:09:38.681 7238.00 IOPS, 28.27 MiB/s [2024-11-19T12:29:44.940Z] 8097.00 IOPS, 31.63 MiB/s [2024-11-19T12:29:45.898Z] 8392.67 IOPS, 32.78 MiB/s [2024-11-19T12:29:46.835Z] 8617.25 IOPS, 33.66 MiB/s [2024-11-19T12:29:47.772Z] 8778.60 IOPS, 34.29 MiB/s [2024-11-19T12:29:48.709Z] 8899.50 IOPS, 34.76 MiB/s [2024-11-19T12:29:49.644Z] 8944.71 IOPS, 34.94 MiB/s [2024-11-19T12:29:51.023Z] 8978.62 IOPS, 35.07 MiB/s [2024-11-19T12:29:51.960Z] 8993.11 IOPS, 35.13 MiB/s [2024-11-19T12:29:51.960Z] 8950.60 IOPS, 34.96 MiB/s 00:09:46.700 Latency(us) 00:09:46.700 [2024-11-19T12:29:51.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.700 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:46.700 Verification LBA range: start 0x0 length 0x4000 00:09:46.700 NVMe0n1 : 10.06 8993.03 35.13 0.00 0.00 113369.29 12094.37 86745.83 00:09:46.700 [2024-11-19T12:29:51.960Z] =================================================================================================================== 00:09:46.700 [2024-11-19T12:29:51.960Z] Total : 8993.03 35.13 0.00 0.00 113369.29 12094.37 86745.83 00:09:46.700 { 00:09:46.700 "results": [ 00:09:46.700 { 00:09:46.700 "job": "NVMe0n1", 00:09:46.700 "core_mask": "0x1", 00:09:46.700 "workload": "verify", 00:09:46.700 "status": "finished", 00:09:46.700 "verify_range": { 00:09:46.700 "start": 0, 00:09:46.700 "length": 16384 00:09:46.700 }, 00:09:46.700 "queue_depth": 1024, 00:09:46.700 "io_size": 4096, 00:09:46.700 "runtime": 10.064679, 00:09:46.700 "iops": 8993.03395567807, 00:09:46.700 "mibps": 35.12903888936746, 00:09:46.700 "io_failed": 0, 00:09:46.700 "io_timeout": 0, 00:09:46.700 "avg_latency_us": 113369.29001237405, 00:09:46.700 "min_latency_us": 12094.370909090909, 00:09:46.700 "max_latency_us": 86745.83272727273 00:09:46.700 
} 00:09:46.700 ], 00:09:46.700 "core_count": 1 00:09:46.700 } 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 77333 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 77333 ']' 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 77333 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77333 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:46.700 killing process with pid 77333 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77333' 00:09:46.700 Received shutdown signal, test time was about 10.000000 seconds 00:09:46.700 00:09:46.700 Latency(us) 00:09:46.700 [2024-11-19T12:29:51.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.700 [2024-11-19T12:29:51.960Z] =================================================================================================================== 00:09:46.700 [2024-11-19T12:29:51.960Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 77333 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 77333 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:46.700 rmmod nvme_tcp 00:09:46.700 rmmod nvme_fabrics 00:09:46.700 rmmod nvme_keyring 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 77309 ']' 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 77309 00:09:46.700 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 77309 ']' 00:09:46.700 
12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 77309 00:09:46.959 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:46.959 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:46.959 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77309 00:09:46.959 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:46.959 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:46.959 killing process with pid 77309 00:09:46.959 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77309' 00:09:46.959 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 77309 00:09:46.959 12:29:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 77309 00:09:46.959 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:46.959 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:46.959 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:46.959 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:46.959 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:46.959 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:09:46.959 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:09:46.959 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:46.959 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:46.959 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:46.959 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:46.959 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:46.959 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:47.218 12:29:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:47.218 00:09:47.218 real 0m12.388s 00:09:47.218 user 0m20.994s 00:09:47.218 sys 0m2.205s 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.218 ************************************ 00:09:47.218 END TEST nvmf_queue_depth 00:09:47.218 ************************************ 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.218 ************************************ 00:09:47.218 START TEST nvmf_target_multipath 00:09:47.218 ************************************ 00:09:47.218 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:47.478 * Looking for test storage... 
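For reference, the nvmf_queue_depth run that just ended condenses to roughly the following sequence (reconstructed from the xtrace above; the address, NQN, serial number and sizes are the values this particular run used, rpc_cmd is the harness wrapper around scripts/rpc.py, and paths are shown relative to the SPDK checkout):

    # Target side: TCP transport, 64 MiB malloc bdev with 512-byte blocks,
    # one subsystem with one namespace, listening on 10.0.0.3:4420
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Initiator side: bdevperf starts idle (-z) on its own RPC socket, a controller is
    # attached to the listener, then perform_tests drives 4096-byte verify I/O at
    # queue depth 1024 for 10 seconds
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Per the Latency table above, that run sustained roughly 8993 IOPS (about 35 MiB/s) of 4 KiB verify I/O at queue depth 1024 with no failed or timed-out I/O before the bdevperf process (pid 77333) and the target (pid 77309) were torn down.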
00:09:47.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:47.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.478 --rc genhtml_branch_coverage=1 00:09:47.478 --rc genhtml_function_coverage=1 00:09:47.478 --rc genhtml_legend=1 00:09:47.478 --rc geninfo_all_blocks=1 00:09:47.478 --rc geninfo_unexecuted_blocks=1 00:09:47.478 00:09:47.478 ' 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:47.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.478 --rc genhtml_branch_coverage=1 00:09:47.478 --rc genhtml_function_coverage=1 00:09:47.478 --rc genhtml_legend=1 00:09:47.478 --rc geninfo_all_blocks=1 00:09:47.478 --rc geninfo_unexecuted_blocks=1 00:09:47.478 00:09:47.478 ' 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:47.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.478 --rc genhtml_branch_coverage=1 00:09:47.478 --rc genhtml_function_coverage=1 00:09:47.478 --rc genhtml_legend=1 00:09:47.478 --rc geninfo_all_blocks=1 00:09:47.478 --rc geninfo_unexecuted_blocks=1 00:09:47.478 00:09:47.478 ' 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:47.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.478 --rc genhtml_branch_coverage=1 00:09:47.478 --rc genhtml_function_coverage=1 00:09:47.478 --rc genhtml_legend=1 00:09:47.478 --rc geninfo_all_blocks=1 00:09:47.478 --rc geninfo_unexecuted_blocks=1 00:09:47.478 00:09:47.478 ' 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.478 
12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:47.478 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.479 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:47.479 12:29:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:47.479 Cannot find device "nvmf_init_br" 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:47.479 Cannot find device "nvmf_init_br2" 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:47.479 Cannot find device "nvmf_tgt_br" 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:47.479 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.738 Cannot find device "nvmf_tgt_br2" 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:47.738 Cannot find device "nvmf_init_br" 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:47.738 Cannot find device "nvmf_init_br2" 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:47.738 Cannot find device "nvmf_tgt_br" 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:47.738 Cannot find device "nvmf_tgt_br2" 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:47.738 Cannot find device "nvmf_br" 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:47.738 Cannot find device "nvmf_init_if" 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:47.738 Cannot find device "nvmf_init_if2" 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:47.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:47.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:47.738 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:47.739 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:47.739 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:47.739 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:47.739 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:47.739 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:47.739 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:47.739 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:47.739 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:47.998 12:29:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
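At this point nvmf_veth_init has laid out the test topology: the initiator keeps nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) in the root namespace, the target side gets nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) inside the nvmf_tgt_ns_spdk namespace, and the steps that follow join the four bridge-side veth peers to nvmf_br and open TCP port 4420. A condensed sketch of what these commands do, using the names and addresses from this run (link-up steps elided):

    # One network namespace for the target plus four veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiator addresses stay in the root namespace, target addresses live in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bridge the *_br peers together and allow NVMe/TCP traffic on port 4420
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings below (10.0.0.3 and 10.0.0.4 from the root namespace, 10.0.0.1 and 10.0.0.2 from inside the namespace) confirm that both paths are reachable before the target application is started.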
00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:47.998 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:47.998 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:09:47.998 00:09:47.998 --- 10.0.0.3 ping statistics --- 00:09:47.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.998 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:47.998 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:47.998 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:09:47.998 00:09:47.998 --- 10.0.0.4 ping statistics --- 00:09:47.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.998 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:47.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:47.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:09:47.998 00:09:47.998 --- 10.0.0.1 ping statistics --- 00:09:47.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.998 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:47.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:09:47.998 00:09:47.998 --- 10.0.0.2 ping statistics --- 00:09:47.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.998 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=77697 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 77697 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 77697 ']' 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
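From here the multipath test proper begins: nvmf_tgt is launched inside the namespace on four cores, the subsystem is created with ANA reporting (-r) and exported on both addresses, the host connects once per path, and later in the log each listener's ANA state is flipped while fio runs. A condensed sketch of that sequence, using the values from this run (the hostnqn/hostid UUID is the one produced by nvme gen-hostnqn earlier; paths are relative to the SPDK checkout):

    # Target inside the namespace: 4 reactors (-m 0xF), all tracepoint groups enabled
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # One subsystem with ANA reporting, one namespace, a listener per path
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420

    # Host side: one connection per path with the same hostnqn/hostid, so both controllers
    # land under a single /sys/class/nvme-subsystem entry
    for addr in 10.0.0.3 10.0.0.4; do
        nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 \
            --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode1 \
            -a "$addr" -s 4420 -g -G
    done

    # While fio runs (4 KiB randrw, iodepth 128, 6 s runtime, crc32c verify), the ANA state
    # of each listener is toggled, for example:
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.4 -s 4420 -n non_optimized

On the host the two paths show up as nvme0c0n1 and nvme0c1n1, and check_ana_state polls each path's ana_state file under /sys/block until it reports the expected value, so the fio job keeps running across the state changes.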
00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.998 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:47.998 [2024-11-19 12:29:53.206089] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:47.998 [2024-11-19 12:29:53.206224] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.258 [2024-11-19 12:29:53.350139] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:48.258 [2024-11-19 12:29:53.394656] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.258 [2024-11-19 12:29:53.394782] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.258 [2024-11-19 12:29:53.394807] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.258 [2024-11-19 12:29:53.394817] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.258 [2024-11-19 12:29:53.394826] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:48.258 [2024-11-19 12:29:53.395516] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.258 [2024-11-19 12:29:53.395654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.258 [2024-11-19 12:29:53.395834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.258 [2024-11-19 12:29:53.395840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.258 [2024-11-19 12:29:53.430837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:48.258 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:48.258 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:09:48.258 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:48.258 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:48.258 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:48.516 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.516 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:48.774 [2024-11-19 12:29:53.805108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.774 12:29:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:49.033 Malloc0 00:09:49.033 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:49.292 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:49.551 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:49.809 [2024-11-19 12:29:54.936600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:49.809 12:29:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:50.068 [2024-11-19 12:29:55.180842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:50.068 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:50.327 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:50.327 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:50.327 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:09:50.327 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:50.327 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:50.327 12:29:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:52.229 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:52.229 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:52.229 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:52.229 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:52.229 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:52.229 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:52.229 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:52.229 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:52.229 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:09:52.229 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:52.229 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:52.488 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:52.488 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:52.488 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:52.488 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:52.488 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:52.488 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=77779 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:52.489 12:29:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:52.489 [global] 00:09:52.489 thread=1 00:09:52.489 invalidate=1 00:09:52.489 rw=randrw 00:09:52.489 time_based=1 00:09:52.489 runtime=6 00:09:52.489 ioengine=libaio 00:09:52.489 direct=1 00:09:52.489 bs=4096 00:09:52.489 iodepth=128 00:09:52.489 norandommap=0 00:09:52.489 numjobs=1 00:09:52.489 00:09:52.489 verify_dump=1 00:09:52.489 verify_backlog=512 00:09:52.489 verify_state_save=0 00:09:52.489 do_verify=1 00:09:52.489 verify=crc32c-intel 00:09:52.489 [job0] 00:09:52.489 filename=/dev/nvme0n1 00:09:52.489 Could not set queue depth (nvme0n1) 00:09:52.489 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:52.489 fio-3.35 00:09:52.489 Starting 1 thread 00:09:53.423 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:53.682 12:29:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:53.940 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:53.940 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:53.940 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:53.940 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:53.940 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:53.940 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:53.940 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:53.940 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:53.940 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:53.940 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:53.940 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:53.940 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:53.940 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:54.198 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:54.456 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:54.456 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:54.456 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:54.456 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:54.456 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:54.456 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:54.456 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:54.456 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:54.456 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:54.456 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:54.456 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:54.456 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:54.456 12:29:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 77779 00:09:58.645 00:09:58.645 job0: (groupid=0, jobs=1): err= 0: pid=77800: Tue Nov 19 12:30:03 2024 00:09:58.645 read: IOPS=10.2k, BW=39.9MiB/s (41.9MB/s)(240MiB/6006msec) 00:09:58.645 slat (usec): min=3, max=6306, avg=57.81, stdev=227.37 00:09:58.645 clat (usec): min=1525, max=16588, avg=8539.07, stdev=1470.30 00:09:58.645 lat (usec): min=1539, max=17347, avg=8596.88, stdev=1473.89 00:09:58.645 clat percentiles (usec): 00:09:58.645 | 1.00th=[ 4359], 5.00th=[ 6456], 10.00th=[ 7308], 20.00th=[ 7832], 00:09:58.645 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8586], 00:09:58.645 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9765], 95.00th=[11994], 00:09:58.645 | 99.00th=[13304], 99.50th=[13698], 99.90th=[14091], 99.95th=[14484], 00:09:58.645 | 99.99th=[16450] 00:09:58.645 bw ( KiB/s): min= 4272, max=25992, per=51.12%, avg=20902.73, stdev=7190.32, samples=11 00:09:58.645 iops : min= 1068, max= 6498, avg=5225.64, stdev=1797.57, samples=11 00:09:58.645 write: IOPS=6026, BW=23.5MiB/s (24.7MB/s)(125MiB/5304msec); 0 zone resets 00:09:58.645 slat (usec): min=7, max=2286, avg=66.36, stdev=161.35 00:09:58.645 clat (usec): min=1175, max=17370, avg=7436.82, stdev=1308.49 00:09:58.645 lat (usec): min=1206, max=17392, avg=7503.18, stdev=1312.45 00:09:58.645 clat percentiles (usec): 00:09:58.645 | 1.00th=[ 3359], 5.00th=[ 4424], 10.00th=[ 5932], 20.00th=[ 6915], 00:09:58.645 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7832], 00:09:58.645 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8455], 95.00th=[ 8848], 00:09:58.645 | 99.00th=[11338], 99.50th=[12125], 99.90th=[13829], 99.95th=[14353], 00:09:58.645 | 99.99th=[15926] 00:09:58.645 bw ( KiB/s): min= 4616, max=26304, per=87.03%, avg=20979.00, stdev=7140.27, samples=11 00:09:58.645 iops : min= 1154, max= 6576, avg=5244.73, stdev=1785.06, samples=11 00:09:58.645 lat (msec) : 2=0.01%, 4=1.45%, 10=92.59%, 20=5.94% 00:09:58.645 cpu : usr=5.48%, sys=21.28%, ctx=5386, majf=0, minf=114 00:09:58.645 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:58.645 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.645 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:58.645 issued rwts: total=61392,31964,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.645 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:58.645 00:09:58.645 Run status group 0 (all jobs): 00:09:58.645 READ: bw=39.9MiB/s (41.9MB/s), 39.9MiB/s-39.9MiB/s (41.9MB/s-41.9MB/s), io=240MiB (251MB), run=6006-6006msec 00:09:58.645 WRITE: bw=23.5MiB/s (24.7MB/s), 23.5MiB/s-23.5MiB/s (24.7MB/s-24.7MB/s), io=125MiB (131MB), run=5304-5304msec 00:09:58.645 00:09:58.645 Disk stats (read/write): 00:09:58.645 nvme0n1: ios=60519/31403, merge=0/0, ticks=495738/218778, in_queue=714516, util=98.63% 00:09:58.645 12:30:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:58.904 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:59.163 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:59.163 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:59.163 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:59.163 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:59.163 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:59.163 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:59.163 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:59.163 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:59.163 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:59.163 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:59.163 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:59.163 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:59.163 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:59.163 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=77888 00:09:59.163 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:59.163 12:30:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:59.163 [global] 00:09:59.163 thread=1 00:09:59.163 invalidate=1 00:09:59.163 rw=randrw 00:09:59.163 time_based=1 00:09:59.163 runtime=6 00:09:59.163 ioengine=libaio 00:09:59.163 direct=1 00:09:59.163 bs=4096 00:09:59.163 iodepth=128 00:09:59.163 norandommap=0 00:09:59.163 numjobs=1 00:09:59.163 00:09:59.421 verify_dump=1 00:09:59.421 verify_backlog=512 00:09:59.421 verify_state_save=0 00:09:59.421 do_verify=1 00:09:59.421 verify=crc32c-intel 00:09:59.421 [job0] 00:09:59.421 filename=/dev/nvme0n1 00:09:59.421 Could not set queue depth (nvme0n1) 00:09:59.421 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:59.421 fio-3.35 00:09:59.421 Starting 1 thread 00:10:00.358 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:00.616 12:30:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:00.875 
12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:00.875 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:00.875 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:00.875 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:00.875 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:00.875 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:00.875 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:00.875 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:00.875 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:00.875 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:00.875 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:00.875 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:00.875 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:01.442 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:01.442 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:01.442 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:01.442 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:01.442 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:01.442 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:01.442 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:01.442 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:01.442 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:01.442 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:01.442 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:01.442 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:01.442 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:01.442 12:30:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 77888 00:10:05.631 00:10:05.631 job0: (groupid=0, jobs=1): err= 0: pid=77909: Tue Nov 19 12:30:10 2024 00:10:05.631 read: IOPS=11.3k, BW=44.3MiB/s (46.4MB/s)(266MiB/6007msec) 00:10:05.631 slat (usec): min=7, max=7308, avg=43.41, stdev=187.54 00:10:05.631 clat (usec): min=619, max=15648, avg=7701.70, stdev=1967.89 00:10:05.631 lat (usec): min=629, max=15684, avg=7745.11, stdev=1982.89 00:10:05.631 clat percentiles (usec): 00:10:05.632 | 1.00th=[ 2868], 5.00th=[ 4047], 10.00th=[ 4883], 20.00th=[ 5997], 00:10:05.632 | 30.00th=[ 7177], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8291], 00:10:05.632 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[10683], 00:10:05.632 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14091], 99.95th=[14222], 00:10:05.632 | 99.99th=[15139] 00:10:05.632 bw ( KiB/s): min= 8656, max=38056, per=53.15%, avg=24085.09, stdev=8984.25, samples=11 00:10:05.632 iops : min= 2164, max= 9514, avg=6021.27, stdev=2246.06, samples=11 00:10:05.632 write: IOPS=6960, BW=27.2MiB/s (28.5MB/s)(142MiB/5214msec); 0 zone resets 00:10:05.632 slat (usec): min=14, max=3182, avg=55.51, stdev=139.34 00:10:05.632 clat (usec): min=760, max=15093, avg=6572.93, stdev=1872.52 00:10:05.632 lat (usec): min=826, max=15117, avg=6628.43, stdev=1888.31 00:10:05.632 clat percentiles (usec): 00:10:05.632 | 1.00th=[ 2507], 5.00th=[ 3326], 10.00th=[ 3752], 20.00th=[ 4490], 00:10:05.632 | 30.00th=[ 5473], 40.00th=[ 6783], 50.00th=[ 7242], 60.00th=[ 7504], 00:10:05.632 | 70.00th=[ 7767], 80.00th=[ 8029], 90.00th=[ 8455], 95.00th=[ 8717], 00:10:05.632 | 99.00th=[10814], 99.50th=[11731], 99.90th=[13304], 99.95th=[14091], 00:10:05.632 | 99.99th=[14484] 00:10:05.632 bw ( KiB/s): min= 9080, max=37344, per=86.63%, avg=24120.00, stdev=8771.76, samples=11 00:10:05.632 iops : min= 2270, max= 9336, avg=6030.00, stdev=2192.94, samples=11 00:10:05.632 lat (usec) : 750=0.01%, 1000=0.02% 00:10:05.632 lat (msec) : 2=0.28%, 4=7.39%, 10=87.78%, 20=4.52% 00:10:05.632 cpu : usr=6.29%, sys=22.91%, ctx=5984, majf=0, minf=127 00:10:05.632 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:05.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.632 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.632 issued rwts: total=68058,36291,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.632 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:10:05.632 00:10:05.632 Run status group 0 (all jobs): 00:10:05.632 READ: bw=44.3MiB/s (46.4MB/s), 44.3MiB/s-44.3MiB/s (46.4MB/s-46.4MB/s), io=266MiB (279MB), run=6007-6007msec 00:10:05.632 WRITE: bw=27.2MiB/s (28.5MB/s), 27.2MiB/s-27.2MiB/s (28.5MB/s-28.5MB/s), io=142MiB (149MB), run=5214-5214msec 00:10:05.632 00:10:05.632 Disk stats (read/write): 00:10:05.632 nvme0n1: ios=67202/35722, merge=0/0, ticks=495094/218523, in_queue=713617, util=98.58% 00:10:05.632 12:30:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:05.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:05.632 12:30:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:05.632 12:30:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:10:05.632 12:30:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:05.632 12:30:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.891 12:30:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.891 12:30:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:05.891 12:30:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:10:05.891 12:30:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.150 rmmod nvme_tcp 00:10:06.150 rmmod nvme_fabrics 00:10:06.150 rmmod nvme_keyring 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n 77697 ']' 
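For readers reconstructing the multipath flow from the trace above: check_ana_state is the target/multipath.sh helper that keeps re-reading /sys/block/<path>/ana_state until it reports the expected ANA state. The sketch below is only an approximation pieced together from the traced variables (path, ana_state, timeout=20, ana_state_f) and the two [[ ]] tests; the polling and retry details are assumptions, not the verbatim SPDK source.

    # Hedged reconstruction of check_ana_state (approximate, not the SPDK original)
    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Re-check once per second until the sysfs node exists and matches the
        # expected ANA state, or give up after $timeout seconds.
        while [[ ! -e $ana_state_f || "$(cat "$ana_state_f")" != "$ana_state" ]]; do
            (( timeout-- > 0 )) || return 1
            sleep 1
        done
    }

The failover itself is driven by flipping listener ANA states over RPC, exactly as traced above (the full /home/vagrant/spdk_repo path is shortened here):

    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized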
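Both multipath fio runs above (pids 77779 and 77888) were launched through scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v. Judging from the job description echoed into the log, a roughly equivalent stand-alone invocation looks like the sketch below; the multipath.fio file name is made up here and the exact template lives in scripts/fio-wrapper, so treat this as an approximation rather than what the wrapper literally does.

    # Approximate stand-alone equivalent of the wrapped fio run (parameters copied
    # from the [global]/[job0] sections printed above; 'multipath.fio' is a made-up name)
    cat > multipath.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=randrw
    time_based=1
    runtime=6
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=128
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio multipath.fio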
00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 77697 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 77697 ']' 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 77697 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77697 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:06.150 killing process with pid 77697 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77697' 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 77697 00:10:06.150 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 77697 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:06.410 12:30:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:06.410 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:06.669 00:10:06.669 real 0m19.288s 00:10:06.669 user 1m11.084s 00:10:06.669 sys 0m9.935s 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.669 ************************************ 00:10:06.669 END TEST nvmf_target_multipath 00:10:06.669 ************************************ 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:06.669 ************************************ 00:10:06.669 START TEST nvmf_zcopy 00:10:06.669 ************************************ 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:06.669 * Looking for test storage... 
00:10:06.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:10:06.669 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:06.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.929 --rc genhtml_branch_coverage=1 00:10:06.929 --rc genhtml_function_coverage=1 00:10:06.929 --rc genhtml_legend=1 00:10:06.929 --rc geninfo_all_blocks=1 00:10:06.929 --rc geninfo_unexecuted_blocks=1 00:10:06.929 00:10:06.929 ' 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:06.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.929 --rc genhtml_branch_coverage=1 00:10:06.929 --rc genhtml_function_coverage=1 00:10:06.929 --rc genhtml_legend=1 00:10:06.929 --rc geninfo_all_blocks=1 00:10:06.929 --rc geninfo_unexecuted_blocks=1 00:10:06.929 00:10:06.929 ' 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:06.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.929 --rc genhtml_branch_coverage=1 00:10:06.929 --rc genhtml_function_coverage=1 00:10:06.929 --rc genhtml_legend=1 00:10:06.929 --rc geninfo_all_blocks=1 00:10:06.929 --rc geninfo_unexecuted_blocks=1 00:10:06.929 00:10:06.929 ' 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:06.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.929 --rc genhtml_branch_coverage=1 00:10:06.929 --rc genhtml_function_coverage=1 00:10:06.929 --rc genhtml_legend=1 00:10:06.929 --rc geninfo_all_blocks=1 00:10:06.929 --rc geninfo_unexecuted_blocks=1 00:10:06.929 00:10:06.929 ' 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.929 12:30:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.929 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:10:06.929 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:10:06.929 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.929 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.929 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:06.929 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.929 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:06.929 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.929 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.929 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.929 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.929 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.929 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.930 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
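The nvmf_veth_init sequence traced below builds a small virtual topology for the tcp tests: two initiator veth interfaces on the host side (10.0.0.1 and 10.0.0.2), two target interfaces moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), their peer ends joined by the nvmf_br bridge, and iptables ACCEPT rules for port 4420. The "Cannot find device" messages in the trace come from the cleanup pass that tears down any leftover devices first. Condensed from the commands that follow:

    # Condensed view of the topology built below (commands as traced, not exhaustive)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br   # likewise nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

Connectivity is then verified with single-packet pings to 10.0.0.3 and 10.0.0.4 from the host, and to 10.0.0.1 and 10.0.0.2 from inside the target namespace, as shown further down.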
00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:06.930 Cannot find device "nvmf_init_br" 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:06.930 12:30:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:06.930 Cannot find device "nvmf_init_br2" 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:06.930 Cannot find device "nvmf_tgt_br" 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:06.930 Cannot find device "nvmf_tgt_br2" 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:06.930 Cannot find device "nvmf_init_br" 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:06.930 Cannot find device "nvmf_init_br2" 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:06.930 Cannot find device "nvmf_tgt_br" 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:06.930 Cannot find device "nvmf_tgt_br2" 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:06.930 Cannot find device "nvmf_br" 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:06.930 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:06.931 Cannot find device "nvmf_init_if" 00:10:06.931 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:06.931 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:06.931 Cannot find device "nvmf_init_if2" 00:10:06.931 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:06.931 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:06.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.931 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:06.931 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:06.931 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.931 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:06.931 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:06.931 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:06.931 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:06.931 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:06.931 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:06.931 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:07.190 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:07.190 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:07.190 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:07.190 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:07.190 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:07.190 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:07.190 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:07.190 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:07.190 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:07.190 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:07.190 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:07.190 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:07.191 12:30:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:07.191 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:07.191 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.320 ms 00:10:07.191 00:10:07.191 --- 10.0.0.3 ping statistics --- 00:10:07.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.191 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:07.191 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:07.191 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:10:07.191 00:10:07.191 --- 10.0.0.4 ping statistics --- 00:10:07.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.191 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:07.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:07.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:07.191 00:10:07.191 --- 10.0.0.1 ping statistics --- 00:10:07.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.191 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:07.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:07.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:10:07.191 00:10:07.191 --- 10.0.0.2 ping statistics --- 00:10:07.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.191 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=78214 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 78214 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 78214 ']' 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.191 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.451 [2024-11-19 12:30:12.462735] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:07.451 [2024-11-19 12:30:12.462839] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.451 [2024-11-19 12:30:12.592093] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.451 [2024-11-19 12:30:12.623571] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.451 [2024-11-19 12:30:12.623638] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.451 [2024-11-19 12:30:12.623663] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.451 [2024-11-19 12:30:12.623686] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.451 [2024-11-19 12:30:12.623693] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.451 [2024-11-19 12:30:12.623733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.451 [2024-11-19 12:30:12.650922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.711 [2024-11-19 12:30:12.779379] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.711 [2024-11-19 12:30:12.795842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.711 malloc0 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:07.711 { 00:10:07.711 "params": { 00:10:07.711 "name": "Nvme$subsystem", 00:10:07.711 "trtype": "$TEST_TRANSPORT", 00:10:07.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:07.711 "adrfam": "ipv4", 00:10:07.711 "trsvcid": "$NVMF_PORT", 00:10:07.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:07.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:07.711 "hdgst": ${hdgst:-false}, 00:10:07.711 "ddgst": ${ddgst:-false} 00:10:07.711 }, 00:10:07.711 "method": "bdev_nvme_attach_controller" 00:10:07.711 } 00:10:07.711 EOF 00:10:07.711 )") 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
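Stripped of the xtrace noise, the provisioning performed above before the first bdevperf run is a short JSON-RPC sequence against the target's /var/tmp/spdk.sock. Roughly, using scripts/rpc.py with the arguments copied from the trace (a sketch, not the test script itself):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy                                        # TCP transport with zero-copy enabled
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10     # allow any host, max 10 namespaces
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # data listener on 10.0.0.3:4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420                    # discovery subsystem listener
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                                               # 32 MB RAM-backed bdev, 4096-byte blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1                       # expose malloc0 as NSID 1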
00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:07.711 12:30:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:07.711 "params": { 00:10:07.711 "name": "Nvme1", 00:10:07.711 "trtype": "tcp", 00:10:07.711 "traddr": "10.0.0.3", 00:10:07.711 "adrfam": "ipv4", 00:10:07.711 "trsvcid": "4420", 00:10:07.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:07.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:07.711 "hdgst": false, 00:10:07.711 "ddgst": false 00:10:07.711 }, 00:10:07.711 "method": "bdev_nvme_attach_controller" 00:10:07.711 }' 00:10:07.711 [2024-11-19 12:30:12.908786] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:07.711 [2024-11-19 12:30:12.908884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78239 ] 00:10:07.971 [2024-11-19 12:30:13.049641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.971 [2024-11-19 12:30:13.092340] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.971 [2024-11-19 12:30:13.134609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:08.230 Running I/O for 10 seconds... 00:10:10.104 6361.00 IOPS, 49.70 MiB/s [2024-11-19T12:30:16.313Z] 6284.00 IOPS, 49.09 MiB/s [2024-11-19T12:30:17.248Z] 6314.00 IOPS, 49.33 MiB/s [2024-11-19T12:30:18.631Z] 6380.50 IOPS, 49.85 MiB/s [2024-11-19T12:30:19.573Z] 6405.40 IOPS, 50.04 MiB/s [2024-11-19T12:30:20.510Z] 6402.50 IOPS, 50.02 MiB/s [2024-11-19T12:30:21.444Z] 6383.71 IOPS, 49.87 MiB/s [2024-11-19T12:30:22.379Z] 6365.88 IOPS, 49.73 MiB/s [2024-11-19T12:30:23.314Z] 6347.67 IOPS, 49.59 MiB/s [2024-11-19T12:30:23.315Z] 6351.80 IOPS, 49.62 MiB/s 00:10:18.055 Latency(us) 00:10:18.055 [2024-11-19T12:30:23.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:18.055 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:18.055 Verification LBA range: start 0x0 length 0x1000 00:10:18.055 Nvme1n1 : 10.01 6355.31 49.65 0.00 0.00 20077.36 1526.69 32887.16 00:10:18.055 [2024-11-19T12:30:23.315Z] =================================================================================================================== 00:10:18.055 [2024-11-19T12:30:23.315Z] Total : 6355.31 49.65 0.00 0.00 20077.36 1526.69 32887.16 00:10:18.313 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=78357 00:10:18.313 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:18.313 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.313 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:18.313 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:18.313 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:18.313 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:18.313 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:18.313 12:30:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:18.313 { 00:10:18.313 "params": { 00:10:18.313 "name": "Nvme$subsystem", 00:10:18.313 "trtype": "$TEST_TRANSPORT", 00:10:18.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:18.313 "adrfam": "ipv4", 00:10:18.313 "trsvcid": "$NVMF_PORT", 00:10:18.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:18.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:18.313 "hdgst": ${hdgst:-false}, 00:10:18.313 "ddgst": ${ddgst:-false} 00:10:18.313 }, 00:10:18.313 "method": "bdev_nvme_attach_controller" 00:10:18.313 } 00:10:18.313 EOF 00:10:18.313 )") 00:10:18.313 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:18.313 [2024-11-19 12:30:23.402334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.313 [2024-11-19 12:30:23.402377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.313 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:10:18.313 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:18.313 12:30:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:18.313 "params": { 00:10:18.313 "name": "Nvme1", 00:10:18.313 "trtype": "tcp", 00:10:18.313 "traddr": "10.0.0.3", 00:10:18.313 "adrfam": "ipv4", 00:10:18.313 "trsvcid": "4420", 00:10:18.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:18.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:18.313 "hdgst": false, 00:10:18.313 "ddgst": false 00:10:18.313 }, 00:10:18.313 "method": "bdev_nvme_attach_controller" 00:10:18.313 }' 00:10:18.313 [2024-11-19 12:30:23.414287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.313 [2024-11-19 12:30:23.414331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.313 [2024-11-19 12:30:23.422288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.313 [2024-11-19 12:30:23.422329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.313 [2024-11-19 12:30:23.434295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.313 [2024-11-19 12:30:23.434336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.313 [2024-11-19 12:30:23.446290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.313 [2024-11-19 12:30:23.446330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.313 [2024-11-19 12:30:23.458299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.313 [2024-11-19 12:30:23.458339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.313 [2024-11-19 12:30:23.465881] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
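Both bdevperf invocations (the 10-second verify run whose results are summarized above, and the 5-second 50/50 randrw run starting here) consume the same generated configuration: gen_nvmf_target_json, the harness helper traced above, emits a bdev_nvme_attach_controller entry for Nvme1 at 10.0.0.3:4420, and the script feeds it to bdevperf over a process-substituted descriptor (--json /dev/fd/62 and /dev/fd/63). A rough stand-alone equivalent, with a hypothetical file in place of the descriptor and the flags copied from the trace:

    gen_nvmf_target_json > /tmp/nvme1.json     # hypothetical path; the helper is sourced from nvmf/common.sh
    ./build/examples/bdevperf --json /tmp/nvme1.json -t 5 -q 128 -w randrw -M 50 -o 8192
    # -t 5: run 5 s, -q 128: queue depth, -w randrw -M 50: 50/50 random read/write, -o 8192: 8 KiB I/Os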
00:10:18.313 [2024-11-19 12:30:23.465967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78357 ] 00:10:18.313 [2024-11-19 12:30:23.470302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.313 [2024-11-19 12:30:23.470329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.313 [2024-11-19 12:30:23.482336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.313 [2024-11-19 12:30:23.482376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.313 [2024-11-19 12:30:23.494304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.313 [2024-11-19 12:30:23.494343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.313 [2024-11-19 12:30:23.506306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.313 [2024-11-19 12:30:23.506345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.313 [2024-11-19 12:30:23.518307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.313 [2024-11-19 12:30:23.518346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.313 [2024-11-19 12:30:23.526310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.313 [2024-11-19 12:30:23.526348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.313 [2024-11-19 12:30:23.538314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.313 [2024-11-19 12:30:23.538353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.313 [2024-11-19 12:30:23.550314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.313 [2024-11-19 12:30:23.550353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.313 [2024-11-19 12:30:23.562322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.313 [2024-11-19 12:30:23.562362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.572 [2024-11-19 12:30:23.574338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.572 [2024-11-19 12:30:23.574400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.572 [2024-11-19 12:30:23.586329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.572 [2024-11-19 12:30:23.586370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.572 [2024-11-19 12:30:23.598333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.572 [2024-11-19 12:30:23.598373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.572 [2024-11-19 12:30:23.606992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.572 [2024-11-19 12:30:23.610336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.572 [2024-11-19 12:30:23.610379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:18.572 [2024-11-19 12:30:23.622350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.572 [2024-11-19 12:30:23.622399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.573 [2024-11-19 12:30:23.634364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.573 [2024-11-19 12:30:23.634416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.573 [2024-11-19 12:30:23.643288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.573 [2024-11-19 12:30:23.646348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.573 [2024-11-19 12:30:23.646389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.573 [2024-11-19 12:30:23.658361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.573 [2024-11-19 12:30:23.658411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.573 [2024-11-19 12:30:23.670375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.573 [2024-11-19 12:30:23.670430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.573 [2024-11-19 12:30:23.681743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:18.573 [2024-11-19 12:30:23.682382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.573 [2024-11-19 12:30:23.682415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.573 [2024-11-19 12:30:23.694384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.573 [2024-11-19 12:30:23.694440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.573 [2024-11-19 12:30:23.706360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.573 [2024-11-19 12:30:23.706399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.573 [2024-11-19 12:30:23.718429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.573 [2024-11-19 12:30:23.718476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.573 [2024-11-19 12:30:23.730431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.573 [2024-11-19 12:30:23.730476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.573 [2024-11-19 12:30:23.742436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.573 [2024-11-19 12:30:23.742480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.573 [2024-11-19 12:30:23.754445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.573 [2024-11-19 12:30:23.754489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.573 [2024-11-19 12:30:23.766474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.573 [2024-11-19 12:30:23.766520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.573 [2024-11-19 12:30:23.778471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:18.573 [2024-11-19 12:30:23.778520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.573 Running I/O for 5 seconds... 00:10:18.573 [2024-11-19 12:30:23.794422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.573 [2024-11-19 12:30:23.794472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.573 [2024-11-19 12:30:23.812285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.573 [2024-11-19 12:30:23.812334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.573 [2024-11-19 12:30:23.827215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.573 [2024-11-19 12:30:23.827268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.832 [2024-11-19 12:30:23.837173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:23.837239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.832 [2024-11-19 12:30:23.852494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:23.852542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.832 [2024-11-19 12:30:23.869496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:23.869546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.832 [2024-11-19 12:30:23.886559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:23.886608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.832 [2024-11-19 12:30:23.902986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:23.903021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.832 [2024-11-19 12:30:23.919244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:23.919307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.832 [2024-11-19 12:30:23.935636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:23.935710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.832 [2024-11-19 12:30:23.952468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:23.952516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.832 [2024-11-19 12:30:23.969051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:23.969099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.832 [2024-11-19 12:30:23.985636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:23.985713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.832 [2024-11-19 12:30:24.001486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:24.001534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
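The long run of repeated error pairs that follows is expected for this test: the pairs appear to come from the test script re-issuing nvmf_subsystem_add_ns for NSID 1, which is already occupied by malloc0, while the 5-second randrw job is in flight, so each attempt logs the subsystem.c "Requested NSID 1 already in use" line followed by the nvmf_rpc.c "Unable to add namespace" line. Each pair corresponds to one failing call of roughly this form (sketch):

    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # -> subsystem.c: Requested NSID 1 already in use
    # -> nvmf_rpc.c:  Unable to add namespace   (the RPC fails; the interleaved IOPS samples show I/O keeps running)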
00:10:18.832 [2024-11-19 12:30:24.019723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:24.019781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.832 [2024-11-19 12:30:24.035043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:24.035094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.832 [2024-11-19 12:30:24.046311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:24.046360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.832 [2024-11-19 12:30:24.062467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:24.062514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.832 [2024-11-19 12:30:24.078070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:24.078119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.832 [2024-11-19 12:30:24.087082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.832 [2024-11-19 12:30:24.087134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.103090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 [2024-11-19 12:30:24.103171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.112323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 [2024-11-19 12:30:24.112371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.127320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 [2024-11-19 12:30:24.127368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.143430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 [2024-11-19 12:30:24.143478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.159449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 [2024-11-19 12:30:24.159496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.178158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 [2024-11-19 12:30:24.178207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.193316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 [2024-11-19 12:30:24.193364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.209650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 [2024-11-19 12:30:24.209739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.227136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 
[2024-11-19 12:30:24.227186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.242130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 [2024-11-19 12:30:24.242178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.257008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 [2024-11-19 12:30:24.257057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.266420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 [2024-11-19 12:30:24.266468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.281362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 [2024-11-19 12:30:24.281412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.297406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 [2024-11-19 12:30:24.297454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.313893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 [2024-11-19 12:30:24.313940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.331137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 [2024-11-19 12:30:24.331188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.092 [2024-11-19 12:30:24.348513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.092 [2024-11-19 12:30:24.348562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.362859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.362897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.378940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.378992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.394445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.394494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.405963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.406011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.421663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.421732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.438597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.438645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.454482] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.454529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.464176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.464224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.479812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.479871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.495981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.496030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.511972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.512019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.527029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.527065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.542115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.542164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.559851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.559884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.573913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.573946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.589256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.589305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.351 [2024-11-19 12:30:24.606671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.351 [2024-11-19 12:30:24.606775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.610 [2024-11-19 12:30:24.621384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.610 [2024-11-19 12:30:24.621433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.610 [2024-11-19 12:30:24.637274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.610 [2024-11-19 12:30:24.637338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.610 [2024-11-19 12:30:24.655220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.610 [2024-11-19 12:30:24.655267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.610 [2024-11-19 12:30:24.669601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.610 [2024-11-19 12:30:24.669649] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.610 [2024-11-19 12:30:24.686212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.610 [2024-11-19 12:30:24.686260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.610 [2024-11-19 12:30:24.701935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.610 [2024-11-19 12:30:24.701985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.610 [2024-11-19 12:30:24.711773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.610 [2024-11-19 12:30:24.711819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.610 [2024-11-19 12:30:24.725752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.610 [2024-11-19 12:30:24.725800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.610 [2024-11-19 12:30:24.740829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.610 [2024-11-19 12:30:24.740879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.610 [2024-11-19 12:30:24.756773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.610 [2024-11-19 12:30:24.756822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.610 [2024-11-19 12:30:24.774867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.610 [2024-11-19 12:30:24.774917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.610 12312.00 IOPS, 96.19 MiB/s [2024-11-19T12:30:24.870Z] [2024-11-19 12:30:24.788523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.610 [2024-11-19 12:30:24.788571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.610 [2024-11-19 12:30:24.803616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.610 [2024-11-19 12:30:24.803692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.610 [2024-11-19 12:30:24.819804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.610 [2024-11-19 12:30:24.819852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.611 [2024-11-19 12:30:24.835904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.611 [2024-11-19 12:30:24.835950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.611 [2024-11-19 12:30:24.853999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.611 [2024-11-19 12:30:24.854035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 12:30:24.869420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:24.869479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 12:30:24.887740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:24.887807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 
12:30:24.902394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:24.902463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 12:30:24.919530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:24.919577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 12:30:24.935703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:24.935761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 12:30:24.951816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:24.951861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 12:30:24.961465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:24.961513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 12:30:24.977037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:24.977101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 12:30:24.991946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:24.991995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 12:30:25.001174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:25.001224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 12:30:25.018383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:25.018431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 12:30:25.035560] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:25.035607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 12:30:25.052100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:25.052148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 12:30:25.068571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:25.068619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 12:30:25.086289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:25.086336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 12:30:25.102513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:25.102560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.869 [2024-11-19 12:30:25.119347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.869 [2024-11-19 12:30:25.119395] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.128 [2024-11-19 12:30:25.134029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.128 [2024-11-19 12:30:25.134093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.128 [2024-11-19 12:30:25.150512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.128 [2024-11-19 12:30:25.150561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.128 [2024-11-19 12:30:25.166628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.128 [2024-11-19 12:30:25.166719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.128 [2024-11-19 12:30:25.184259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.128 [2024-11-19 12:30:25.184307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.128 [2024-11-19 12:30:25.200039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.128 [2024-11-19 12:30:25.200087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.128 [2024-11-19 12:30:25.218016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.128 [2024-11-19 12:30:25.218065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.128 [2024-11-19 12:30:25.233073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.128 [2024-11-19 12:30:25.233122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.128 [2024-11-19 12:30:25.248657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.128 [2024-11-19 12:30:25.248737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.128 [2024-11-19 12:30:25.265267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.128 [2024-11-19 12:30:25.265315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.128 [2024-11-19 12:30:25.280949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.128 [2024-11-19 12:30:25.280998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.128 [2024-11-19 12:30:25.290492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.128 [2024-11-19 12:30:25.290539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.128 [2024-11-19 12:30:25.305391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.128 [2024-11-19 12:30:25.305454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.128 [2024-11-19 12:30:25.322626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.128 [2024-11-19 12:30:25.322700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.128 [2024-11-19 12:30:25.338422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.128 [2024-11-19 12:30:25.338470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.128 [2024-11-19 12:30:25.354631] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.128 [2024-11-19 12:30:25.354706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.128 [2024-11-19 12:30:25.371472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.128 [2024-11-19 12:30:25.371521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.386 [2024-11-19 12:30:25.390532] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.386 [2024-11-19 12:30:25.390582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.386 [2024-11-19 12:30:25.404354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.386 [2024-11-19 12:30:25.404402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.386 [2024-11-19 12:30:25.419333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.386 [2024-11-19 12:30:25.419381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.386 [2024-11-19 12:30:25.428952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.386 [2024-11-19 12:30:25.429000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.386 [2024-11-19 12:30:25.445204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.386 [2024-11-19 12:30:25.445252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.386 [2024-11-19 12:30:25.456040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.386 [2024-11-19 12:30:25.456104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.386 [2024-11-19 12:30:25.470871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.386 [2024-11-19 12:30:25.470907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.386 [2024-11-19 12:30:25.487456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.386 [2024-11-19 12:30:25.487503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.386 [2024-11-19 12:30:25.502750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.386 [2024-11-19 12:30:25.502804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.387 [2024-11-19 12:30:25.511933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.387 [2024-11-19 12:30:25.511967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.387 [2024-11-19 12:30:25.528050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.387 [2024-11-19 12:30:25.528115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.387 [2024-11-19 12:30:25.545895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.387 [2024-11-19 12:30:25.545943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.387 [2024-11-19 12:30:25.560100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.387 [2024-11-19 12:30:25.560147] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.387 [2024-11-19 12:30:25.577037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.387 [2024-11-19 12:30:25.577087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.387 [2024-11-19 12:30:25.593255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.387 [2024-11-19 12:30:25.593306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.387 [2024-11-19 12:30:25.614650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.387 [2024-11-19 12:30:25.614727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.387 [2024-11-19 12:30:25.629426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.387 [2024-11-19 12:30:25.629463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.645 [2024-11-19 12:30:25.645471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.645 [2024-11-19 12:30:25.645525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.645 [2024-11-19 12:30:25.661435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.645 [2024-11-19 12:30:25.661471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.645 [2024-11-19 12:30:25.680129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.645 [2024-11-19 12:30:25.680178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.645 [2024-11-19 12:30:25.694566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.645 [2024-11-19 12:30:25.694614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.645 [2024-11-19 12:30:25.706464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.645 [2024-11-19 12:30:25.706512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.645 [2024-11-19 12:30:25.723012] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.645 [2024-11-19 12:30:25.723063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.645 [2024-11-19 12:30:25.738542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.645 [2024-11-19 12:30:25.738590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.645 [2024-11-19 12:30:25.750445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.645 [2024-11-19 12:30:25.750493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.645 [2024-11-19 12:30:25.766754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.645 [2024-11-19 12:30:25.766825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.645 [2024-11-19 12:30:25.783271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.645 [2024-11-19 12:30:25.783334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.645 12249.50 IOPS, 95.70 MiB/s [2024-11-19T12:30:25.905Z] [2024-11-19 
12:30:25.800244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.645 [2024-11-19 12:30:25.800294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.645 [2024-11-19 12:30:25.816116] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.645 [2024-11-19 12:30:25.816180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.645 [2024-11-19 12:30:25.834293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.645 [2024-11-19 12:30:25.834343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.645 [2024-11-19 12:30:25.847935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.645 [2024-11-19 12:30:25.847983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.645 [2024-11-19 12:30:25.864477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.646 [2024-11-19 12:30:25.864525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.646 [2024-11-19 12:30:25.880798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.646 [2024-11-19 12:30:25.880846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.646 [2024-11-19 12:30:25.897669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.646 [2024-11-19 12:30:25.897764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.904 [2024-11-19 12:30:25.913208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.904 [2024-11-19 12:30:25.913261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.904 [2024-11-19 12:30:25.931545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.904 [2024-11-19 12:30:25.931596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.904 [2024-11-19 12:30:25.946964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.904 [2024-11-19 12:30:25.947001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.904 [2024-11-19 12:30:25.956500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.904 [2024-11-19 12:30:25.956550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.904 [2024-11-19 12:30:25.973319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.904 [2024-11-19 12:30:25.973369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.904 [2024-11-19 12:30:25.987891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.904 [2024-11-19 12:30:25.987956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.904 [2024-11-19 12:30:26.004565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.904 [2024-11-19 12:30:26.004626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.904 [2024-11-19 12:30:26.021068] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.904 [2024-11-19 12:30:26.021120] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.904 [2024-11-19 12:30:26.037856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.904 [2024-11-19 12:30:26.037903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.904 [2024-11-19 12:30:26.054883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.904 [2024-11-19 12:30:26.054934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.904 [2024-11-19 12:30:26.070736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.904 [2024-11-19 12:30:26.070784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.904 [2024-11-19 12:30:26.080305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.904 [2024-11-19 12:30:26.080353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.904 [2024-11-19 12:30:26.096340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.904 [2024-11-19 12:30:26.096389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.904 [2024-11-19 12:30:26.113282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.904 [2024-11-19 12:30:26.113331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.904 [2024-11-19 12:30:26.129235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.904 [2024-11-19 12:30:26.129285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.904 [2024-11-19 12:30:26.147527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.904 [2024-11-19 12:30:26.147576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.163 [2024-11-19 12:30:26.163746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.163 [2024-11-19 12:30:26.163838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.163 [2024-11-19 12:30:26.180224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.163 [2024-11-19 12:30:26.180272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.163 [2024-11-19 12:30:26.197213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.163 [2024-11-19 12:30:26.197261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.163 [2024-11-19 12:30:26.212466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.163 [2024-11-19 12:30:26.212515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.163 [2024-11-19 12:30:26.229203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.163 [2024-11-19 12:30:26.229252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.163 [2024-11-19 12:30:26.245386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.163 [2024-11-19 12:30:26.245434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.163 [2024-11-19 12:30:26.262541] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.163 [2024-11-19 12:30:26.262618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.163 [2024-11-19 12:30:26.279165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.163 [2024-11-19 12:30:26.279216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.163 [2024-11-19 12:30:26.295591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.163 [2024-11-19 12:30:26.295640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.163 [2024-11-19 12:30:26.314030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.163 [2024-11-19 12:30:26.314079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.163 [2024-11-19 12:30:26.328164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.163 [2024-11-19 12:30:26.328213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.163 [2024-11-19 12:30:26.343968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.163 [2024-11-19 12:30:26.344017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.163 [2024-11-19 12:30:26.362097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.163 [2024-11-19 12:30:26.362147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.163 [2024-11-19 12:30:26.377965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.163 [2024-11-19 12:30:26.378015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.163 [2024-11-19 12:30:26.393401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.163 [2024-11-19 12:30:26.393450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.163 [2024-11-19 12:30:26.412282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.163 [2024-11-19 12:30:26.412331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.427283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.427334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.437519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.437568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.452566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.452603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.471033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.471072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.487096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.487176] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.503932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.503969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.519623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.519720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.529201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.529237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.545992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.546028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.561185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.561236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.577442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.577490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.593922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.593971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.610665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.610762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.627003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.627055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.642165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.642215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.659420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.659469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.422 [2024-11-19 12:30:26.673553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.422 [2024-11-19 12:30:26.673602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 [2024-11-19 12:30:26.689521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.680 [2024-11-19 12:30:26.689573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 [2024-11-19 12:30:26.707799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.680 [2024-11-19 12:30:26.707848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 [2024-11-19 12:30:26.722331] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.680 [2024-11-19 12:30:26.722380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 [2024-11-19 12:30:26.738088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.680 [2024-11-19 12:30:26.738137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 [2024-11-19 12:30:26.754303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.680 [2024-11-19 12:30:26.754351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 [2024-11-19 12:30:26.772297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.680 [2024-11-19 12:30:26.772346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 12116.67 IOPS, 94.66 MiB/s [2024-11-19T12:30:26.940Z] [2024-11-19 12:30:26.787947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.680 [2024-11-19 12:30:26.787994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 [2024-11-19 12:30:26.805003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.680 [2024-11-19 12:30:26.805040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 [2024-11-19 12:30:26.820791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.680 [2024-11-19 12:30:26.820838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 [2024-11-19 12:30:26.837801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.680 [2024-11-19 12:30:26.837847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 [2024-11-19 12:30:26.853566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.680 [2024-11-19 12:30:26.853614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 [2024-11-19 12:30:26.863534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.680 [2024-11-19 12:30:26.863580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 [2024-11-19 12:30:26.879011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.680 [2024-11-19 12:30:26.879049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 [2024-11-19 12:30:26.889377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.680 [2024-11-19 12:30:26.889428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 [2024-11-19 12:30:26.904354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.680 [2024-11-19 12:30:26.904404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 [2024-11-19 12:30:26.913569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.680 [2024-11-19 12:30:26.913601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.680 [2024-11-19 12:30:26.929991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
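Each *ERROR* pair in the run above is one rejected RPC: the test keeps asking the target to attach a namespace with NSID 1 while that NSID is still occupied, so spdk_nvmf_subsystem_add_ns_ext refuses it and nvmf_rpc_ns_paused reports the failure. A minimal way to provoke the same message by hand might look like the sketch below, assuming a running target with subsystem nqn.2016-06.io.spdk:cnode1, a bdev named malloc0, and the standard scripts/rpc.py client under the repo path that appears elsewhere in this log (none of this is part of zcopy.sh itself).

```bash
#!/usr/bin/env bash
# Hypothetical reproduction sketch for the "Requested NSID 1 already in use" error.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# The first attach claims NSID 1 and succeeds.
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# A second attach asking for the same explicit NSID is rejected; the target
# logs "Requested NSID 1 already in use" and the RPC returns an error.
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```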
00:10:21.680 [2024-11-19 12:30:26.930021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.938 [2024-11-19 12:30:26.945838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.938 [2024-11-19 12:30:26.945888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.938 [2024-11-19 12:30:26.963373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.938 [2024-11-19 12:30:26.963423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.938 [2024-11-19 12:30:26.978105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.938 [2024-11-19 12:30:26.978158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.938 [2024-11-19 12:30:26.994644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.938 [2024-11-19 12:30:26.994698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.938 [2024-11-19 12:30:27.011527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.938 [2024-11-19 12:30:27.011576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.938 [2024-11-19 12:30:27.029180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.938 [2024-11-19 12:30:27.029217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.938 [2024-11-19 12:30:27.043848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.938 [2024-11-19 12:30:27.043897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.938 [2024-11-19 12:30:27.059549] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.938 [2024-11-19 12:30:27.059597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.938 [2024-11-19 12:30:27.077731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.938 [2024-11-19 12:30:27.077789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.938 [2024-11-19 12:30:27.093170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.938 [2024-11-19 12:30:27.093208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.938 [2024-11-19 12:30:27.110604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.938 [2024-11-19 12:30:27.110654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.938 [2024-11-19 12:30:27.126522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.938 [2024-11-19 12:30:27.126571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.938 [2024-11-19 12:30:27.143584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.938 [2024-11-19 12:30:27.143632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.938 [2024-11-19 12:30:27.159318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.938 [2024-11-19 12:30:27.159368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.938 [2024-11-19 12:30:27.175747] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.938 [2024-11-19 12:30:27.175797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.938 [2024-11-19 12:30:27.191955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.938 [2024-11-19 12:30:27.191993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.195 [2024-11-19 12:30:27.210133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.195 [2024-11-19 12:30:27.210172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.195 [2024-11-19 12:30:27.224153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.195 [2024-11-19 12:30:27.224202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.195 [2024-11-19 12:30:27.239565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.195 [2024-11-19 12:30:27.239613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.195 [2024-11-19 12:30:27.248829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.196 [2024-11-19 12:30:27.248876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.196 [2024-11-19 12:30:27.264161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.196 [2024-11-19 12:30:27.264210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.196 [2024-11-19 12:30:27.275993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.196 [2024-11-19 12:30:27.276043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.196 [2024-11-19 12:30:27.292752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.196 [2024-11-19 12:30:27.292802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.196 [2024-11-19 12:30:27.308865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.196 [2024-11-19 12:30:27.308912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.196 [2024-11-19 12:30:27.327216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.196 [2024-11-19 12:30:27.327267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.196 [2024-11-19 12:30:27.341428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.196 [2024-11-19 12:30:27.341477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.196 [2024-11-19 12:30:27.357179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.196 [2024-11-19 12:30:27.357228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.196 [2024-11-19 12:30:27.375329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.196 [2024-11-19 12:30:27.375376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.196 [2024-11-19 12:30:27.390369] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.196 [2024-11-19 12:30:27.390417] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.196 [2024-11-19 12:30:27.409862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.196 [2024-11-19 12:30:27.409928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.196 [2024-11-19 12:30:27.425147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.196 [2024-11-19 12:30:27.425198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.196 [2024-11-19 12:30:27.441627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.196 [2024-11-19 12:30:27.441699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.453 [2024-11-19 12:30:27.459623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.453 [2024-11-19 12:30:27.459701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.453 [2024-11-19 12:30:27.475875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.453 [2024-11-19 12:30:27.475912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.453 [2024-11-19 12:30:27.491587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.453 [2024-11-19 12:30:27.491635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.453 [2024-11-19 12:30:27.510324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.453 [2024-11-19 12:30:27.510372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.453 [2024-11-19 12:30:27.525416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.453 [2024-11-19 12:30:27.525465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.453 [2024-11-19 12:30:27.535870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.453 [2024-11-19 12:30:27.535905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.453 [2024-11-19 12:30:27.550466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.453 [2024-11-19 12:30:27.550514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.453 [2024-11-19 12:30:27.565802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.453 [2024-11-19 12:30:27.565850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.453 [2024-11-19 12:30:27.575315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.453 [2024-11-19 12:30:27.575364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.453 [2024-11-19 12:30:27.590715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.453 [2024-11-19 12:30:27.590763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.453 [2024-11-19 12:30:27.607955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.453 [2024-11-19 12:30:27.608000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.453 [2024-11-19 12:30:27.622671] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.453 [2024-11-19 12:30:27.622745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.454 [2024-11-19 12:30:27.637853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.454 [2024-11-19 12:30:27.637901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.454 [2024-11-19 12:30:27.655196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.454 [2024-11-19 12:30:27.655243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.454 [2024-11-19 12:30:27.670427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.454 [2024-11-19 12:30:27.670477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.454 [2024-11-19 12:30:27.685540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.454 [2024-11-19 12:30:27.685588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.454 [2024-11-19 12:30:27.697582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.454 [2024-11-19 12:30:27.697632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 [2024-11-19 12:30:27.714132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.712 [2024-11-19 12:30:27.714178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 [2024-11-19 12:30:27.729255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.712 [2024-11-19 12:30:27.729334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 [2024-11-19 12:30:27.746232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.712 [2024-11-19 12:30:27.746280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 [2024-11-19 12:30:27.762846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.712 [2024-11-19 12:30:27.762884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 [2024-11-19 12:30:27.779166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.712 [2024-11-19 12:30:27.779231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 12011.75 IOPS, 93.84 MiB/s [2024-11-19T12:30:27.972Z] [2024-11-19 12:30:27.795998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.712 [2024-11-19 12:30:27.796048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 [2024-11-19 12:30:27.812652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.712 [2024-11-19 12:30:27.812717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 [2024-11-19 12:30:27.828484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.712 [2024-11-19 12:30:27.828532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 [2024-11-19 12:30:27.845083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:22.712 [2024-11-19 12:30:27.845131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 [2024-11-19 12:30:27.863379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.712 [2024-11-19 12:30:27.863426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 [2024-11-19 12:30:27.878013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.712 [2024-11-19 12:30:27.878076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 [2024-11-19 12:30:27.893301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.712 [2024-11-19 12:30:27.893349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 [2024-11-19 12:30:27.902353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.712 [2024-11-19 12:30:27.902401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 [2024-11-19 12:30:27.918127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.712 [2024-11-19 12:30:27.918176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 [2024-11-19 12:30:27.933690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.712 [2024-11-19 12:30:27.933739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 [2024-11-19 12:30:27.943876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.712 [2024-11-19 12:30:27.943924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.712 [2024-11-19 12:30:27.958097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.712 [2024-11-19 12:30:27.958146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.970 [2024-11-19 12:30:27.974380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.970 [2024-11-19 12:30:27.974430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.970 [2024-11-19 12:30:27.990993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.970 [2024-11-19 12:30:27.991045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.970 [2024-11-19 12:30:28.006967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.970 [2024-11-19 12:30:28.007004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.970 [2024-11-19 12:30:28.023911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.970 [2024-11-19 12:30:28.023980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.970 [2024-11-19 12:30:28.040739] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.970 [2024-11-19 12:30:28.040797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.970 [2024-11-19 12:30:28.057207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.970 [2024-11-19 12:30:28.057275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.970 [2024-11-19 12:30:28.074103] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.970 [2024-11-19 12:30:28.074139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.970 [2024-11-19 12:30:28.090575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.970 [2024-11-19 12:30:28.090624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.970 [2024-11-19 12:30:28.107763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.970 [2024-11-19 12:30:28.107812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.970 [2024-11-19 12:30:28.124279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.970 [2024-11-19 12:30:28.124329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.970 [2024-11-19 12:30:28.142788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.970 [2024-11-19 12:30:28.142864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.970 [2024-11-19 12:30:28.156976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.970 [2024-11-19 12:30:28.157025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.970 [2024-11-19 12:30:28.172696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.970 [2024-11-19 12:30:28.172754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.970 [2024-11-19 12:30:28.190565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.970 [2024-11-19 12:30:28.190617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.970 [2024-11-19 12:30:28.204939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.970 [2024-11-19 12:30:28.204988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.970 [2024-11-19 12:30:28.220579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.970 [2024-11-19 12:30:28.220626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.229 [2024-11-19 12:30:28.238893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.229 [2024-11-19 12:30:28.238946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.229 [2024-11-19 12:30:28.255248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.229 [2024-11-19 12:30:28.255297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.229 [2024-11-19 12:30:28.274376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.229 [2024-11-19 12:30:28.274424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.229 [2024-11-19 12:30:28.288552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.229 [2024-11-19 12:30:28.288601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.229 [2024-11-19 12:30:28.304201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.229 [2024-11-19 12:30:28.304277] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.229 [2024-11-19 12:30:28.321490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.229 [2024-11-19 12:30:28.321568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.229 [2024-11-19 12:30:28.336101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.229 [2024-11-19 12:30:28.336167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.229 [2024-11-19 12:30:28.351500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.229 [2024-11-19 12:30:28.351570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.229 [2024-11-19 12:30:28.368020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.229 [2024-11-19 12:30:28.368118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.229 [2024-11-19 12:30:28.385008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.229 [2024-11-19 12:30:28.385065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.229 [2024-11-19 12:30:28.401775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.229 [2024-11-19 12:30:28.401846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.229 [2024-11-19 12:30:28.418333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.229 [2024-11-19 12:30:28.418395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.229 [2024-11-19 12:30:28.436443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.229 [2024-11-19 12:30:28.436568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.229 [2024-11-19 12:30:28.451074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.229 [2024-11-19 12:30:28.451145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.229 [2024-11-19 12:30:28.467024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.229 [2024-11-19 12:30:28.467108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.229 [2024-11-19 12:30:28.484600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.229 [2024-11-19 12:30:28.484717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.487 [2024-11-19 12:30:28.500804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.487 [2024-11-19 12:30:28.500868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.487 [2024-11-19 12:30:28.517975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.487 [2024-11-19 12:30:28.518019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.487 [2024-11-19 12:30:28.534171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.487 [2024-11-19 12:30:28.534231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.487 [2024-11-19 12:30:28.552589] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.487 [2024-11-19 12:30:28.552660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.487 [2024-11-19 12:30:28.567035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.487 [2024-11-19 12:30:28.567072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.487 [2024-11-19 12:30:28.584141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.487 [2024-11-19 12:30:28.584188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.487 [2024-11-19 12:30:28.599656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.487 [2024-11-19 12:30:28.599728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.487 [2024-11-19 12:30:28.616017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.487 [2024-11-19 12:30:28.616063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.487 [2024-11-19 12:30:28.631566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.487 [2024-11-19 12:30:28.631613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.487 [2024-11-19 12:30:28.641008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.487 [2024-11-19 12:30:28.641072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.487 [2024-11-19 12:30:28.656160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.487 [2024-11-19 12:30:28.656207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.487 [2024-11-19 12:30:28.672913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.487 [2024-11-19 12:30:28.672961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.487 [2024-11-19 12:30:28.688494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.487 [2024-11-19 12:30:28.688542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.487 [2024-11-19 12:30:28.698710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.487 [2024-11-19 12:30:28.698789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.487 [2024-11-19 12:30:28.713004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.487 [2024-11-19 12:30:28.713052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.487 [2024-11-19 12:30:28.729148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.487 [2024-11-19 12:30:28.729195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 [2024-11-19 12:30:28.746794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.746 [2024-11-19 12:30:28.746856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 [2024-11-19 12:30:28.762489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.746 [2024-11-19 12:30:28.762538] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 [2024-11-19 12:30:28.780265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.746 [2024-11-19 12:30:28.780314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 12032.20 IOPS, 94.00 MiB/s [2024-11-19T12:30:29.006Z] [2024-11-19 12:30:28.791391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.746 [2024-11-19 12:30:28.791436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 00:10:23.746 Latency(us) 00:10:23.746 [2024-11-19T12:30:29.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:23.746 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:23.746 Nvme1n1 : 5.01 12040.12 94.06 0.00 0.00 10620.11 3783.21 20018.27 00:10:23.746 [2024-11-19T12:30:29.006Z] =================================================================================================================== 00:10:23.746 [2024-11-19T12:30:29.006Z] Total : 12040.12 94.06 0.00 0.00 10620.11 3783.21 20018.27 00:10:23.746 [2024-11-19 12:30:28.803386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.746 [2024-11-19 12:30:28.803431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 [2024-11-19 12:30:28.815420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.746 [2024-11-19 12:30:28.815478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 [2024-11-19 12:30:28.827413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.746 [2024-11-19 12:30:28.827472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 [2024-11-19 12:30:28.839435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.746 [2024-11-19 12:30:28.839489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 [2024-11-19 12:30:28.851432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.746 [2024-11-19 12:30:28.851485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 [2024-11-19 12:30:28.867424] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.746 [2024-11-19 12:30:28.867476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 [2024-11-19 12:30:28.879439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.746 [2024-11-19 12:30:28.879470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 [2024-11-19 12:30:28.891438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.746 [2024-11-19 12:30:28.891493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 [2024-11-19 12:30:28.903425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.746 [2024-11-19 12:30:28.903470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 [2024-11-19 12:30:28.915440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.746 [2024-11-19 
12:30:28.915490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 [2024-11-19 12:30:28.927418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.746 [2024-11-19 12:30:28.927458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 [2024-11-19 12:30:28.935418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.746 [2024-11-19 12:30:28.935456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.746 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (78357) - No such process 00:10:23.746 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 78357 00:10:23.746 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.746 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.746 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.746 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.746 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:23.746 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.746 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.746 delay0 00:10:23.746 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.746 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:23.746 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.746 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.746 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.746 12:30:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:24.004 [2024-11-19 12:30:29.119199] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:30.623 Initializing NVMe Controllers 00:10:30.623 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:30.623 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:30.623 Initialization complete. Launching workers. 
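At this point zcopy.sh (lines 49-56 in the trace above) kills the earlier bdevperf process, frees NSID 1, wraps malloc0 in a delay bdev, re-exposes it as NSID 1, and runs the abort example against the TCP listener so that queued I/O can be aborted while it is still held in the target. A condensed sketch of that sequence is below; the RPC names and flags are copied from the log, while the scripts/rpc.py client is an assumption standing in for the script's own rpc_cmd wrapper.

```bash
#!/usr/bin/env bash
# Sketch of the delay/abort step logged above, under the assumptions stated in the lead-in.
spdk=/home/vagrant/spdk_repo/spdk
rpc="$spdk/scripts/rpc.py"

# Free NSID 1, which the earlier bdevperf phase was using.
"$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

# Wrap malloc0 in a delay bdev (all four latency knobs set to 1,000,000 us)
# so I/O lingers in the target long enough to be aborted from the host side.
"$rpc" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Expose the delayed bdev as NSID 1 again.
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Queue 50/50 random read/write I/O at depth 64 for 5 seconds and abort it,
# targeting the TCP listener at 10.0.0.3:4420 (same flags as in the log).
"$spdk/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
```

The "success / unsuccessful / failed" counters reported in the abort summary that follows appear to describe the abort commands themselves (submitted 374, with 33 that could not be submitted), not data-path I/O errors.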
00:10:30.623 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 87 00:10:30.623 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 374, failed to submit 33 00:10:30.623 success 255, unsuccessful 119, failed 0 00:10:30.623 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:30.623 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:30.623 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:30.623 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:30.623 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:30.623 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:30.623 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:30.623 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:30.623 rmmod nvme_tcp 00:10:30.623 rmmod nvme_fabrics 00:10:30.623 rmmod nvme_keyring 00:10:30.623 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:30.623 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:30.623 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:30.623 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 78214 ']' 00:10:30.623 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 78214 00:10:30.623 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 78214 ']' 00:10:30.623 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 78214 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78214 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:30.624 killing process with pid 78214 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78214' 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 78214 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 78214 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:10:30.624 12:30:35 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:30.624 00:10:30.624 real 0m23.924s 00:10:30.624 user 0m39.061s 00:10:30.624 sys 0m6.658s 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.624 ************************************ 00:10:30.624 END TEST nvmf_zcopy 00:10:30.624 ************************************ 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.624 ************************************ 00:10:30.624 START TEST nvmf_nmic 00:10:30.624 ************************************ 00:10:30.624 12:30:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:30.624 * Looking for test storage... 00:10:30.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:30.624 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:30.883 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:30.883 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.883 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.883 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.883 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.883 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.883 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:30.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.884 --rc genhtml_branch_coverage=1 00:10:30.884 --rc genhtml_function_coverage=1 00:10:30.884 --rc genhtml_legend=1 00:10:30.884 --rc geninfo_all_blocks=1 00:10:30.884 --rc geninfo_unexecuted_blocks=1 00:10:30.884 00:10:30.884 ' 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:30.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.884 --rc genhtml_branch_coverage=1 00:10:30.884 --rc genhtml_function_coverage=1 00:10:30.884 --rc genhtml_legend=1 00:10:30.884 --rc geninfo_all_blocks=1 00:10:30.884 --rc geninfo_unexecuted_blocks=1 00:10:30.884 00:10:30.884 ' 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:30.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.884 --rc genhtml_branch_coverage=1 00:10:30.884 --rc genhtml_function_coverage=1 00:10:30.884 --rc genhtml_legend=1 00:10:30.884 --rc geninfo_all_blocks=1 00:10:30.884 --rc geninfo_unexecuted_blocks=1 00:10:30.884 00:10:30.884 ' 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:30.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.884 --rc genhtml_branch_coverage=1 00:10:30.884 --rc genhtml_function_coverage=1 00:10:30.884 --rc genhtml_legend=1 00:10:30.884 --rc geninfo_all_blocks=1 00:10:30.884 --rc geninfo_unexecuted_blocks=1 00:10:30.884 00:10:30.884 ' 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.884 12:30:35 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.884 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.884 12:30:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.884 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.884 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.884 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:30.884 12:30:36 
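One detail worth noting before nvmftestinit runs: the host identity used by every later connect was fixed just above, where nvme gen-hostnqn produced the host NQN and the same UUID was reused as NVME_HOSTID. A minimal sketch of that pattern, with the UUID extraction written as an assumption in place of whatever test/nvmf/common.sh actually does:

    # Generate one host NQN for the whole test run and reuse it for every connect.
    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # assumption: keep only the trailing UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # Later connects can then expand the array:
    #   nvme connect "${NVME_HOST[@]}" -t tcp -n <subsystem NQN> -a <addr> -s 4420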
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:30.884 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.884 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:30.884 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:30.884 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:30.884 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.884 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.884 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.884 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:30.884 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:30.884 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:30.884 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:30.884 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:30.885 Cannot 
find device "nvmf_init_br" 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:30.885 Cannot find device "nvmf_init_br2" 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:30.885 Cannot find device "nvmf_tgt_br" 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:30.885 Cannot find device "nvmf_tgt_br2" 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:30.885 Cannot find device "nvmf_init_br" 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:30.885 Cannot find device "nvmf_init_br2" 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:30.885 Cannot find device "nvmf_tgt_br" 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:30.885 Cannot find device "nvmf_tgt_br2" 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:30.885 Cannot find device "nvmf_br" 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:30.885 Cannot find device "nvmf_init_if" 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:30.885 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:31.142 Cannot find device "nvmf_init_if2" 00:10:31.142 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:31.142 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:31.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:31.142 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:31.142 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:31.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:31.142 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:31.142 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:31.142 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
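The run of "Cannot find device" / "Cannot open network namespace" messages above is expected on a fresh host: nvmf_veth_init first tears down anything a previous run may have left behind, and each failing cleanup command is immediately followed by a true so the error is non-fatal. A condensed equivalent of that guard pattern, using the device names from the trace:

    # Remove leftovers from a previous run; missing devices are not an error.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true
        ip link set "$dev" down     || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if  || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true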
00:10:31.142 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:31.142 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:31.142 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:31.142 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:31.142 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:31.142 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:31.142 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:31.142 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:31.142 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:31.143 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:31.143 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:10:31.143 00:10:31.143 --- 10.0.0.3 ping statistics --- 00:10:31.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.143 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:31.143 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:31.143 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.099 ms 00:10:31.143 00:10:31.143 --- 10.0.0.4 ping statistics --- 00:10:31.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.143 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:31.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:31.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:31.143 00:10:31.143 --- 10.0.0.1 ping statistics --- 00:10:31.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.143 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:31.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:31.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:10:31.143 00:10:31.143 --- 10.0.0.2 ping statistics --- 00:10:31.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.143 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # return 0 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=78742 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 78742 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 78742 ']' 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:31.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:31.143 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.401 [2024-11-19 12:30:36.455340] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
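nvmfappstart, traced just above, launches the target inside the test namespace and then blocks until the RPC socket answers before any rpc_cmd is issued. A minimal sketch of that launch-and-wait step, using the paths and flags visible in the trace; the polling loop is an assumption standing in for the real waitforlisten helper:

    # Run nvmf_tgt inside the namespace so it only sees the veth interfaces.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Do not send RPCs until the app is listening on /var/tmp/spdk.sock.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done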
00:10:31.401 [2024-11-19 12:30:36.455437] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.401 [2024-11-19 12:30:36.599315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.401 [2024-11-19 12:30:36.644555] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.401 [2024-11-19 12:30:36.644621] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.401 [2024-11-19 12:30:36.644643] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.401 [2024-11-19 12:30:36.644653] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.401 [2024-11-19 12:30:36.644661] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:31.401 [2024-11-19 12:30:36.645428] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.401 [2024-11-19 12:30:36.645630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.401 [2024-11-19 12:30:36.645763] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.401 [2024-11-19 12:30:36.645771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.660 [2024-11-19 12:30:36.681203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.660 [2024-11-19 12:30:36.789322] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.660 Malloc0 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:31.660 12:30:36 
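With the target running, the nmic test provisions it entirely over JSON-RPC: create the TCP transport, back it with a malloc bdev, expose the bdev through cnode1, and add a listener on 10.0.0.3:4420 (the add_ns and add_listener calls are traced just below). Condensed from the rpc_cmd calls in the trace, assuming rpc.py against the default /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" nvmf_create_transport -t tcp -o -u 8192            # flags exactly as passed by the test
    "$rpc" bdev_malloc_create 64 512 -b Malloc0                # 64 MB malloc bdev, 512-byte blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The failure traced a little further down, where the same Malloc0 is added to cnode2, is the point of test case 1: a bdev claimed exclusive_write by one subsystem cannot be attached to a second one, so the "already claimed" error is the expected result.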
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.660 [2024-11-19 12:30:36.838203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.660 test case1: single bdev can't be used in multiple subsystems 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.660 [2024-11-19 12:30:36.862058] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:31.660 [2024-11-19 12:30:36.862103] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:31.660 [2024-11-19 12:30:36.862117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.660 request: 00:10:31.660 { 00:10:31.660 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:31.660 "namespace": { 00:10:31.660 "bdev_name": "Malloc0", 00:10:31.660 "no_auto_visible": false 00:10:31.660 }, 00:10:31.660 "method": "nvmf_subsystem_add_ns", 00:10:31.660 "req_id": 1 00:10:31.660 } 00:10:31.660 Got JSON-RPC error response 00:10:31.660 response: 00:10:31.660 { 00:10:31.660 "code": -32602, 00:10:31.660 "message": "Invalid parameters" 00:10:31.660 } 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:31.660 Adding namespace failed - expected result. 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:31.660 test case2: host connect to nvmf target in multiple paths 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.660 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:31.661 [2024-11-19 12:30:36.874166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:31.661 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.661 12:30:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:31.918 12:30:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:31.918 12:30:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:31.918 12:30:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:31.918 12:30:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:31.918 12:30:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:31.918 12:30:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:34.446 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:34.446 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:34.446 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:34.446 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:34.446 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:34.446 12:30:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:34.446 12:30:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:34.446 [global] 00:10:34.446 thread=1 00:10:34.446 invalidate=1 00:10:34.446 rw=write 00:10:34.446 time_based=1 00:10:34.446 runtime=1 00:10:34.446 ioengine=libaio 00:10:34.446 direct=1 00:10:34.446 bs=4096 00:10:34.446 iodepth=1 00:10:34.446 norandommap=0 00:10:34.446 numjobs=1 00:10:34.446 00:10:34.446 verify_dump=1 00:10:34.446 verify_backlog=512 00:10:34.446 verify_state_save=0 00:10:34.446 do_verify=1 00:10:34.446 verify=crc32c-intel 00:10:34.446 [job0] 00:10:34.446 filename=/dev/nvme0n1 00:10:34.446 Could not set queue depth (nvme0n1) 00:10:34.446 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.446 fio-3.35 00:10:34.446 Starting 1 thread 00:10:35.379 00:10:35.379 job0: (groupid=0, jobs=1): err= 0: pid=78822: Tue Nov 19 12:30:40 2024 00:10:35.379 read: IOPS=2892, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec) 00:10:35.379 slat (nsec): min=11561, max=64066, avg=14671.17, stdev=5202.55 00:10:35.379 clat (usec): min=131, max=375, avg=184.55, stdev=29.87 00:10:35.379 lat (usec): min=144, max=390, avg=199.23, stdev=30.96 00:10:35.379 clat percentiles (usec): 00:10:35.379 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 161], 00:10:35.379 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 186], 00:10:35.380 | 70.00th=[ 194], 80.00th=[ 204], 90.00th=[ 221], 95.00th=[ 245], 00:10:35.380 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 330], 99.95th=[ 343], 00:10:35.380 | 99.99th=[ 375] 00:10:35.380 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:35.380 slat (usec): min=16, max=100, avg=21.95, stdev= 7.06 00:10:35.380 clat (usec): min=79, max=683, avg=112.53, stdev=24.11 00:10:35.380 lat (usec): min=96, max=714, avg=134.47, stdev=25.72 00:10:35.380 clat percentiles (usec): 00:10:35.380 | 1.00th=[ 83], 5.00th=[ 88], 10.00th=[ 91], 20.00th=[ 96], 00:10:35.380 | 30.00th=[ 99], 40.00th=[ 103], 50.00th=[ 108], 60.00th=[ 113], 00:10:35.380 | 70.00th=[ 120], 80.00th=[ 128], 90.00th=[ 141], 95.00th=[ 155], 00:10:35.380 | 99.00th=[ 188], 99.50th=[ 198], 99.90th=[ 239], 99.95th=[ 260], 00:10:35.380 | 99.99th=[ 685] 00:10:35.380 bw ( KiB/s): min=12263, max=12263, per=99.90%, avg=12263.00, stdev= 0.00, samples=1 00:10:35.380 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:35.380 lat (usec) : 100=17.03%, 250=80.79%, 500=2.16%, 750=0.02% 00:10:35.380 cpu : usr=2.20%, sys=8.60%, ctx=5967, majf=0, minf=5 00:10:35.380 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.380 issued rwts: total=2895,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.380 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.380 00:10:35.380 Run status group 0 (all jobs): 00:10:35.380 READ: bw=11.3MiB/s (11.8MB/s), 11.3MiB/s-11.3MiB/s (11.8MB/s-11.8MB/s), io=11.3MiB (11.9MB), run=1001-1001msec 00:10:35.380 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:35.380 00:10:35.380 Disk stats (read/write): 00:10:35.380 nvme0n1: ios=2610/2782, merge=0/0, ticks=534/358, 
in_queue=892, util=91.18% 00:10:35.380 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:35.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:35.380 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:35.380 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:35.380 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:35.380 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.380 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.380 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:35.380 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:35.380 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:35.380 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:35.380 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:35.380 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:35.380 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.380 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:35.380 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.380 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.380 rmmod nvme_tcp 00:10:35.380 rmmod nvme_fabrics 00:10:35.380 rmmod nvme_keyring 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 78742 ']' 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 78742 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 78742 ']' 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 78742 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78742 00:10:35.638 killing process with pid 78742 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78742' 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # 
kill 78742 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 78742 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:35.638 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:35.639 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:35.639 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:35.898 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:35.898 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:35.898 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:35.898 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:35.898 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:35.898 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:35.898 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:35.898 12:30:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:35.898 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.898 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.898 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:35.898 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.898 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.898 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.898 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:35.898 ************************************ 00:10:35.898 END TEST nvmf_nmic 00:10:35.898 ************************************ 00:10:35.898 00:10:35.898 real 0m5.291s 00:10:35.898 user 0m15.520s 00:10:35.898 sys 0m2.301s 00:10:35.898 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.898 12:30:41 
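nvmftestfini's teardown above also shows why the setup phase tagged every firewall rule with an "SPDK_NVMF:" comment: cleanup does not delete individual rules, it simply re-loads the ruleset with the tagged lines filtered out. A sketch of that pair of helpers as reconstructed from the trace (the names ipts and iptr come from nvmf/common.sh):

    # Insert a rule and tag it so it can be found again at teardown.
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    # Drop every SPDK-tagged rule in one shot by round-tripping the ruleset.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    # Usage, as traced earlier in this run:
    #   ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    #   ...
    #   iptr    # at teardown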
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.898 12:30:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:35.898 12:30:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:35.898 12:30:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.898 12:30:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:35.898 ************************************ 00:10:35.898 START TEST nvmf_fio_target 00:10:35.898 ************************************ 00:10:35.898 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:36.157 * Looking for test storage... 00:10:36.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:36.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.157 --rc genhtml_branch_coverage=1 00:10:36.157 --rc genhtml_function_coverage=1 00:10:36.157 --rc genhtml_legend=1 00:10:36.157 --rc geninfo_all_blocks=1 00:10:36.157 --rc geninfo_unexecuted_blocks=1 00:10:36.157 00:10:36.157 ' 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:36.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.157 --rc genhtml_branch_coverage=1 00:10:36.157 --rc genhtml_function_coverage=1 00:10:36.157 --rc genhtml_legend=1 00:10:36.157 --rc geninfo_all_blocks=1 00:10:36.157 --rc geninfo_unexecuted_blocks=1 00:10:36.157 00:10:36.157 ' 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:36.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.157 --rc genhtml_branch_coverage=1 00:10:36.157 --rc genhtml_function_coverage=1 00:10:36.157 --rc genhtml_legend=1 00:10:36.157 --rc geninfo_all_blocks=1 00:10:36.157 --rc geninfo_unexecuted_blocks=1 00:10:36.157 00:10:36.157 ' 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:36.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.157 --rc genhtml_branch_coverage=1 00:10:36.157 --rc genhtml_function_coverage=1 00:10:36.157 --rc genhtml_legend=1 00:10:36.157 --rc geninfo_all_blocks=1 00:10:36.157 --rc geninfo_unexecuted_blocks=1 00:10:36.157 00:10:36.157 ' 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:36.157 
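The scripts/common.sh tracing above is the test deciding which coverage-option spelling the installed lcov understands: lt splits the two version strings on dots, dashes and colons and compares them field by field. A standalone sketch of that comparison, condensed from the traced helper (the real code goes through cmp_versions):

    # Return success (0) if version $1 is strictly less than version $2.
    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo "1.15 is older than 2"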
12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.157 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.158 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:36.158 12:30:41 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:36.158 Cannot find device "nvmf_init_br" 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:36.158 Cannot find device "nvmf_init_br2" 00:10:36.158 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:36.159 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:36.159 Cannot find device "nvmf_tgt_br" 00:10:36.159 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:36.159 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:36.159 Cannot find device "nvmf_tgt_br2" 00:10:36.159 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:36.159 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:36.417 Cannot find device "nvmf_init_br" 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:36.417 Cannot find device "nvmf_init_br2" 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:36.417 Cannot find device "nvmf_tgt_br" 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:36.417 Cannot find device "nvmf_tgt_br2" 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:36.417 Cannot find device "nvmf_br" 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:36.417 Cannot find device "nvmf_init_if" 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:36.417 Cannot find device "nvmf_init_if2" 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:36.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:36.417 
12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:36.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:36.417 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:36.676 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:36.676 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:10:36.676 00:10:36.676 --- 10.0.0.3 ping statistics --- 00:10:36.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.676 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:36.676 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:36.676 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:10:36.676 00:10:36.676 --- 10.0.0.4 ping statistics --- 00:10:36.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.676 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:36.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:36.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:36.676 00:10:36.676 --- 10.0.0.1 ping statistics --- 00:10:36.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.676 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:36.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:36.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:10:36.676 00:10:36.676 --- 10.0.0.2 ping statistics --- 00:10:36.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:36.676 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=79053 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 79053 00:10:36.676 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 79053 ']' 00:10:36.677 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.677 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:36.677 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.677 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:36.677 12:30:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.677 [2024-11-19 12:30:41.852746] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
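[editor's note] The nvmf_veth_init sequence traced above builds a small veth/bridge topology: two initiator-side interfaces stay in the root namespace with 10.0.0.1 and 10.0.0.2, two target-side interfaces are moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3 and 10.0.0.4, the peer ends are enslaved to the nvmf_br bridge, iptables opens TCP port 4420, and four pings verify reachability in both directions. A condensed sketch of those steps follows; names and addresses are taken from the log, while the preceding cleanup pass and the SPDK_NVMF iptables comment tagging are omitted for brevity.

    # create the target network namespace and four veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # target-side interfaces live inside the namespace
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator addresses in the root namespace, target addresses in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and bridge the peer ends together
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # allow NVMe/TCP traffic on port 4420 and bridge-internal forwarding
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # sanity checks: initiator -> target and target -> initiator
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2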
00:10:36.677 [2024-11-19 12:30:41.853171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.935 [2024-11-19 12:30:41.997064] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:36.935 [2024-11-19 12:30:42.033748] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:36.935 [2024-11-19 12:30:42.034040] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:36.935 [2024-11-19 12:30:42.034196] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:36.935 [2024-11-19 12:30:42.034250] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:36.935 [2024-11-19 12:30:42.034345] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:36.935 [2024-11-19 12:30:42.034538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.935 [2024-11-19 12:30:42.034605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:36.935 [2024-11-19 12:30:42.034865] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:36.935 [2024-11-19 12:30:42.034866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.935 [2024-11-19 12:30:42.066357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:36.935 12:30:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.935 12:30:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:36.935 12:30:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:36.935 12:30:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:36.935 12:30:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.935 12:30:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.935 12:30:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:37.193 [2024-11-19 12:30:42.432816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.451 12:30:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.709 12:30:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:37.709 12:30:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.967 12:30:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:37.968 12:30:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.225 12:30:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:38.225 12:30:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.483 12:30:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:38.483 12:30:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:38.742 12:30:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.000 12:30:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:39.000 12:30:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.258 12:30:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:39.258 12:30:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.516 12:30:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:39.516 12:30:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:39.774 12:30:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:40.033 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:40.033 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.291 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:40.291 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:40.549 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:40.806 [2024-11-19 12:30:45.928078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:40.806 12:30:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:41.064 12:30:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:41.321 12:30:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:41.578 12:30:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:41.578 12:30:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:41.578 12:30:46 
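[editor's note] All of the target provisioning traced above goes through rpc.py against the nvmf_tgt instance that nvmfappstart launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF). Collected in one place it looks roughly like the sketch below; the $rpc shorthand and the loop over the seven identical bdev_malloc_create calls are editorial condensations, and the Malloc0..Malloc6 names are the ones the target auto-assigned in this run.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport, flags as in the log above
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # seven 64 MiB malloc bdevs with 512-byte blocks (returned as Malloc0 .. Malloc6)
    for _ in $(seq 1 7); do $rpc bdev_malloc_create 64 512; done

    # RAID-0 over Malloc2+Malloc3, concat over Malloc4..Malloc6
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

    # one subsystem exposing four namespaces, listening on 10.0.0.3:4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

    # connect from the initiator side; the four namespaces appear as /dev/nvme0n1..n4
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 \
        --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9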
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.579 12:30:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:41.579 12:30:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:41.579 12:30:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:43.506 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:43.506 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:43.506 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:43.506 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:43.506 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.506 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:43.506 12:30:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:43.506 [global] 00:10:43.506 thread=1 00:10:43.506 invalidate=1 00:10:43.506 rw=write 00:10:43.506 time_based=1 00:10:43.506 runtime=1 00:10:43.506 ioengine=libaio 00:10:43.506 direct=1 00:10:43.506 bs=4096 00:10:43.506 iodepth=1 00:10:43.506 norandommap=0 00:10:43.506 numjobs=1 00:10:43.506 00:10:43.506 verify_dump=1 00:10:43.506 verify_backlog=512 00:10:43.506 verify_state_save=0 00:10:43.506 do_verify=1 00:10:43.506 verify=crc32c-intel 00:10:43.506 [job0] 00:10:43.506 filename=/dev/nvme0n1 00:10:43.506 [job1] 00:10:43.506 filename=/dev/nvme0n2 00:10:43.506 [job2] 00:10:43.506 filename=/dev/nvme0n3 00:10:43.506 [job3] 00:10:43.506 filename=/dev/nvme0n4 00:10:43.506 Could not set queue depth (nvme0n1) 00:10:43.506 Could not set queue depth (nvme0n2) 00:10:43.506 Could not set queue depth (nvme0n3) 00:10:43.506 Could not set queue depth (nvme0n4) 00:10:43.764 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.764 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.764 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.764 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.764 fio-3.35 00:10:43.764 Starting 4 threads 00:10:45.137 00:10:45.137 job0: (groupid=0, jobs=1): err= 0: pid=79231: Tue Nov 19 12:30:50 2024 00:10:45.137 read: IOPS=2912, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec) 00:10:45.137 slat (nsec): min=11030, max=44398, avg=13616.73, stdev=3161.36 00:10:45.137 clat (usec): min=135, max=517, avg=167.80, stdev=15.80 00:10:45.137 lat (usec): min=147, max=532, avg=181.42, stdev=16.21 00:10:45.137 clat percentiles (usec): 00:10:45.137 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:10:45.137 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:10:45.137 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 194], 00:10:45.137 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 273], 99.95th=[ 437], 00:10:45.137 | 99.99th=[ 519] 
00:10:45.137 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:45.137 slat (nsec): min=13662, max=92892, avg=20488.81, stdev=5086.06 00:10:45.137 clat (usec): min=94, max=254, avg=129.80, stdev=13.10 00:10:45.137 lat (usec): min=110, max=347, avg=150.29, stdev=14.25 00:10:45.137 clat percentiles (usec): 00:10:45.137 | 1.00th=[ 106], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 120], 00:10:45.137 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 131], 00:10:45.137 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 155], 00:10:45.137 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 200], 00:10:45.137 | 99.99th=[ 255] 00:10:45.137 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:45.137 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:45.137 lat (usec) : 100=0.08%, 250=99.85%, 500=0.05%, 750=0.02% 00:10:45.137 cpu : usr=1.80%, sys=8.40%, ctx=5987, majf=0, minf=9 00:10:45.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.137 issued rwts: total=2915,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.138 job1: (groupid=0, jobs=1): err= 0: pid=79232: Tue Nov 19 12:30:50 2024 00:10:45.138 read: IOPS=2821, BW=11.0MiB/s (11.6MB/s)(11.0MiB/1001msec) 00:10:45.138 slat (nsec): min=11661, max=41466, avg=14399.30, stdev=3021.43 00:10:45.138 clat (usec): min=137, max=565, avg=171.51, stdev=17.58 00:10:45.138 lat (usec): min=153, max=590, avg=185.91, stdev=17.78 00:10:45.138 clat percentiles (usec): 00:10:45.138 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:10:45.138 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:10:45.138 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 196], 00:10:45.138 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 343], 99.95th=[ 562], 00:10:45.138 | 99.99th=[ 570] 00:10:45.138 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:45.138 slat (nsec): min=14787, max=89361, avg=20854.50, stdev=4335.42 00:10:45.138 clat (usec): min=99, max=222, avg=130.71, stdev=13.19 00:10:45.138 lat (usec): min=117, max=311, avg=151.56, stdev=13.80 00:10:45.138 clat percentiles (usec): 00:10:45.138 | 1.00th=[ 109], 5.00th=[ 115], 10.00th=[ 117], 20.00th=[ 121], 00:10:45.138 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 133], 00:10:45.138 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 157], 00:10:45.138 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 188], 99.95th=[ 204], 00:10:45.138 | 99.99th=[ 223] 00:10:45.138 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:45.138 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:45.138 lat (usec) : 100=0.02%, 250=99.88%, 500=0.07%, 750=0.03% 00:10:45.138 cpu : usr=2.00%, sys=8.40%, ctx=5896, majf=0, minf=7 00:10:45.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.138 issued rwts: total=2824,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.138 latency : target=0, window=0, percentile=100.00%, depth=1 
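[editor's note] The job file that fio-wrapper generated for this first pass (-p nvmf -i 4096 -d 1 -t write -r 1 -v, one job per connected namespace) is printed piecemeal before the "Starting 4 threads" line above; pulled together it amounts to roughly the following. The file name and the explicit cat/fio invocation are illustrative only, since fio-wrapper manages its own temporary job file.

    cat > nvmf_write.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    [job1]
    filename=/dev/nvme0n2
    [job2]
    filename=/dev/nvme0n3
    [job3]
    filename=/dev/nvme0n4
    EOF
    fio nvmf_write.fio

The later fio-wrapper passes in this test only vary rw= (write vs. randwrite) and iodepth= (1 vs. 128), matching the -t and -d arguments visible in their invocations below.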
00:10:45.138 job2: (groupid=0, jobs=1): err= 0: pid=79233: Tue Nov 19 12:30:50 2024 00:10:45.138 read: IOPS=2579, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec) 00:10:45.138 slat (nsec): min=11599, max=39968, avg=14212.30, stdev=2684.82 00:10:45.138 clat (usec): min=145, max=758, avg=179.23, stdev=22.40 00:10:45.138 lat (usec): min=158, max=771, avg=193.44, stdev=22.58 00:10:45.138 clat percentiles (usec): 00:10:45.138 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:10:45.138 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:10:45.138 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 206], 00:10:45.138 | 99.00th=[ 227], 99.50th=[ 273], 99.90th=[ 482], 99.95th=[ 562], 00:10:45.138 | 99.99th=[ 758] 00:10:45.138 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:45.138 slat (nsec): min=14558, max=91039, avg=21044.62, stdev=5156.44 00:10:45.138 clat (usec): min=107, max=409, avg=139.06, stdev=14.60 00:10:45.138 lat (usec): min=126, max=431, avg=160.11, stdev=15.23 00:10:45.138 clat percentiles (usec): 00:10:45.138 | 1.00th=[ 117], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 129], 00:10:45.138 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:10:45.138 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 163], 00:10:45.138 | 99.00th=[ 178], 99.50th=[ 180], 99.90th=[ 208], 99.95th=[ 392], 00:10:45.138 | 99.99th=[ 412] 00:10:45.138 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:45.138 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:45.138 lat (usec) : 250=99.66%, 500=0.30%, 750=0.02%, 1000=0.02% 00:10:45.138 cpu : usr=2.10%, sys=7.90%, ctx=5656, majf=0, minf=11 00:10:45.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.138 issued rwts: total=2582,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.138 job3: (groupid=0, jobs=1): err= 0: pid=79234: Tue Nov 19 12:30:50 2024 00:10:45.138 read: IOPS=2568, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:45.138 slat (nsec): min=11492, max=74649, avg=13658.75, stdev=2887.98 00:10:45.138 clat (usec): min=146, max=803, avg=179.15, stdev=20.41 00:10:45.138 lat (usec): min=159, max=816, avg=192.81, stdev=20.58 00:10:45.138 clat percentiles (usec): 00:10:45.138 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:10:45.138 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:10:45.138 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 206], 00:10:45.138 | 99.00th=[ 221], 99.50th=[ 231], 99.90th=[ 355], 99.95th=[ 510], 00:10:45.138 | 99.99th=[ 807] 00:10:45.138 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:45.138 slat (nsec): min=13991, max=90683, avg=20206.44, stdev=4737.89 00:10:45.138 clat (usec): min=106, max=257, avg=141.19, stdev=14.19 00:10:45.138 lat (usec): min=125, max=348, avg=161.40, stdev=14.83 00:10:45.138 clat percentiles (usec): 00:10:45.138 | 1.00th=[ 117], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 130], 00:10:45.138 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:10:45.138 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 167], 00:10:45.138 | 99.00th=[ 180], 99.50th=[ 186], 99.90th=[ 206], 99.95th=[ 219], 
00:10:45.138 | 99.99th=[ 258] 00:10:45.138 bw ( KiB/s): min=12288, max=12288, per=25.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:45.138 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:45.138 lat (usec) : 250=99.93%, 500=0.04%, 750=0.02%, 1000=0.02% 00:10:45.138 cpu : usr=1.90%, sys=7.70%, ctx=5643, majf=0, minf=9 00:10:45.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.138 issued rwts: total=2571,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.138 00:10:45.138 Run status group 0 (all jobs): 00:10:45.138 READ: bw=42.5MiB/s (44.6MB/s), 10.0MiB/s-11.4MiB/s (10.5MB/s-11.9MB/s), io=42.5MiB (44.6MB), run=1001-1001msec 00:10:45.138 WRITE: bw=48.0MiB/s (50.3MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=48.0MiB (50.3MB), run=1001-1001msec 00:10:45.138 00:10:45.138 Disk stats (read/write): 00:10:45.138 nvme0n1: ios=2558/2560, merge=0/0, ticks=454/357, in_queue=811, util=87.45% 00:10:45.138 nvme0n2: ios=2440/2560, merge=0/0, ticks=419/352, in_queue=771, util=87.00% 00:10:45.138 nvme0n3: ios=2229/2560, merge=0/0, ticks=418/375, in_queue=793, util=89.01% 00:10:45.138 nvme0n4: ios=2222/2560, merge=0/0, ticks=412/382, in_queue=794, util=89.57% 00:10:45.138 12:30:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:45.138 [global] 00:10:45.138 thread=1 00:10:45.138 invalidate=1 00:10:45.138 rw=randwrite 00:10:45.138 time_based=1 00:10:45.138 runtime=1 00:10:45.138 ioengine=libaio 00:10:45.138 direct=1 00:10:45.138 bs=4096 00:10:45.138 iodepth=1 00:10:45.138 norandommap=0 00:10:45.138 numjobs=1 00:10:45.138 00:10:45.138 verify_dump=1 00:10:45.138 verify_backlog=512 00:10:45.138 verify_state_save=0 00:10:45.138 do_verify=1 00:10:45.138 verify=crc32c-intel 00:10:45.138 [job0] 00:10:45.138 filename=/dev/nvme0n1 00:10:45.138 [job1] 00:10:45.138 filename=/dev/nvme0n2 00:10:45.138 [job2] 00:10:45.138 filename=/dev/nvme0n3 00:10:45.138 [job3] 00:10:45.138 filename=/dev/nvme0n4 00:10:45.138 Could not set queue depth (nvme0n1) 00:10:45.138 Could not set queue depth (nvme0n2) 00:10:45.138 Could not set queue depth (nvme0n3) 00:10:45.138 Could not set queue depth (nvme0n4) 00:10:45.138 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.138 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.138 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.138 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.138 fio-3.35 00:10:45.138 Starting 4 threads 00:10:46.511 00:10:46.512 job0: (groupid=0, jobs=1): err= 0: pid=79293: Tue Nov 19 12:30:51 2024 00:10:46.512 read: IOPS=1828, BW=7313KiB/s (7488kB/s)(7320KiB/1001msec) 00:10:46.512 slat (usec): min=6, max=148, avg=14.01, stdev=10.29 00:10:46.512 clat (usec): min=142, max=685, avg=284.78, stdev=66.62 00:10:46.512 lat (usec): min=157, max=694, avg=298.79, stdev=69.42 00:10:46.512 clat percentiles (usec): 00:10:46.512 | 1.00th=[ 163], 5.00th=[ 210], 10.00th=[ 221], 20.00th=[ 231], 00:10:46.512 | 
30.00th=[ 243], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 285], 00:10:46.512 | 70.00th=[ 318], 80.00th=[ 338], 90.00th=[ 371], 95.00th=[ 408], 00:10:46.512 | 99.00th=[ 490], 99.50th=[ 523], 99.90th=[ 603], 99.95th=[ 685], 00:10:46.512 | 99.99th=[ 685] 00:10:46.512 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:46.512 slat (usec): min=8, max=111, avg=20.42, stdev= 9.21 00:10:46.512 clat (usec): min=100, max=851, avg=197.89, stdev=63.73 00:10:46.512 lat (usec): min=122, max=865, avg=218.31, stdev=63.63 00:10:46.512 clat percentiles (usec): 00:10:46.512 | 1.00th=[ 109], 5.00th=[ 117], 10.00th=[ 122], 20.00th=[ 133], 00:10:46.512 | 30.00th=[ 155], 40.00th=[ 180], 50.00th=[ 194], 60.00th=[ 206], 00:10:46.512 | 70.00th=[ 225], 80.00th=[ 249], 90.00th=[ 293], 95.00th=[ 314], 00:10:46.512 | 99.00th=[ 351], 99.50th=[ 355], 99.90th=[ 379], 99.95th=[ 445], 00:10:46.512 | 99.99th=[ 848] 00:10:46.512 bw ( KiB/s): min= 9736, max= 9736, per=29.74%, avg=9736.00, stdev= 0.00, samples=1 00:10:46.512 iops : min= 2434, max= 2434, avg=2434.00, stdev= 0.00, samples=1 00:10:46.512 lat (usec) : 250=60.31%, 500=39.30%, 750=0.36%, 1000=0.03% 00:10:46.512 cpu : usr=1.30%, sys=5.40%, ctx=4118, majf=0, minf=7 00:10:46.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.512 issued rwts: total=1830,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.512 job1: (groupid=0, jobs=1): err= 0: pid=79294: Tue Nov 19 12:30:51 2024 00:10:46.512 read: IOPS=1580, BW=6322KiB/s (6473kB/s)(6328KiB/1001msec) 00:10:46.512 slat (usec): min=6, max=434, avg=13.44, stdev=12.05 00:10:46.512 clat (usec): min=96, max=743, avg=281.90, stdev=62.25 00:10:46.512 lat (usec): min=171, max=997, avg=295.34, stdev=63.78 00:10:46.512 clat percentiles (usec): 00:10:46.512 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 233], 00:10:46.512 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 262], 60.00th=[ 277], 00:10:46.512 | 70.00th=[ 306], 80.00th=[ 338], 90.00th=[ 371], 95.00th=[ 392], 00:10:46.512 | 99.00th=[ 445], 99.50th=[ 545], 99.90th=[ 660], 99.95th=[ 742], 00:10:46.512 | 99.99th=[ 742] 00:10:46.512 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:46.512 slat (usec): min=3, max=163, avg=22.49, stdev=15.39 00:10:46.512 clat (usec): min=96, max=3958, avg=234.63, stdev=107.04 00:10:46.512 lat (usec): min=173, max=3981, avg=257.13, stdev=108.61 00:10:46.512 clat percentiles (usec): 00:10:46.512 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 192], 00:10:46.512 | 30.00th=[ 200], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 223], 00:10:46.512 | 70.00th=[ 237], 80.00th=[ 269], 90.00th=[ 314], 95.00th=[ 347], 00:10:46.512 | 99.00th=[ 457], 99.50th=[ 506], 99.90th=[ 971], 99.95th=[ 1631], 00:10:46.512 | 99.99th=[ 3949] 00:10:46.512 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:10:46.512 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:46.512 lat (usec) : 100=0.14%, 250=59.86%, 500=39.39%, 750=0.52%, 1000=0.03% 00:10:46.512 lat (msec) : 2=0.03%, 4=0.03% 00:10:46.512 cpu : usr=1.50%, sys=5.30%, ctx=3896, majf=0, minf=17 00:10:46.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.512 issued rwts: total=1582,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.512 job2: (groupid=0, jobs=1): err= 0: pid=79295: Tue Nov 19 12:30:51 2024 00:10:46.512 read: IOPS=1654, BW=6617KiB/s (6776kB/s)(6624KiB/1001msec) 00:10:46.512 slat (usec): min=6, max=155, avg=14.89, stdev= 8.59 00:10:46.512 clat (usec): min=117, max=642, avg=281.82, stdev=61.76 00:10:46.512 lat (usec): min=183, max=653, avg=296.71, stdev=61.53 00:10:46.512 clat percentiles (usec): 00:10:46.512 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 229], 00:10:46.512 | 30.00th=[ 239], 40.00th=[ 249], 50.00th=[ 262], 60.00th=[ 281], 00:10:46.512 | 70.00th=[ 314], 80.00th=[ 338], 90.00th=[ 371], 95.00th=[ 396], 00:10:46.512 | 99.00th=[ 437], 99.50th=[ 469], 99.90th=[ 635], 99.95th=[ 644], 00:10:46.512 | 99.99th=[ 644] 00:10:46.512 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:46.512 slat (usec): min=3, max=189, avg=23.99, stdev=17.49 00:10:46.512 clat (usec): min=96, max=7286, avg=221.23, stdev=242.29 00:10:46.512 lat (usec): min=136, max=7308, avg=245.22, stdev=243.46 00:10:46.512 clat percentiles (usec): 00:10:46.512 | 1.00th=[ 119], 5.00th=[ 127], 10.00th=[ 135], 20.00th=[ 149], 00:10:46.512 | 30.00th=[ 169], 40.00th=[ 186], 50.00th=[ 200], 60.00th=[ 215], 00:10:46.512 | 70.00th=[ 231], 80.00th=[ 265], 90.00th=[ 306], 95.00th=[ 330], 00:10:46.512 | 99.00th=[ 449], 99.50th=[ 502], 99.90th=[ 3916], 99.95th=[ 5735], 00:10:46.512 | 99.99th=[ 7308] 00:10:46.512 bw ( KiB/s): min= 8272, max= 8272, per=25.27%, avg=8272.00, stdev= 0.00, samples=1 00:10:46.512 iops : min= 2068, max= 2068, avg=2068.00, stdev= 0.00, samples=1 00:10:46.512 lat (usec) : 100=0.03%, 250=60.96%, 500=38.58%, 750=0.24%, 1000=0.03% 00:10:46.512 lat (msec) : 2=0.03%, 4=0.08%, 10=0.05% 00:10:46.512 cpu : usr=1.10%, sys=6.20%, ctx=4003, majf=0, minf=13 00:10:46.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.512 issued rwts: total=1656,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.512 job3: (groupid=0, jobs=1): err= 0: pid=79296: Tue Nov 19 12:30:51 2024 00:10:46.512 read: IOPS=1590, BW=6362KiB/s (6514kB/s)(6368KiB/1001msec) 00:10:46.512 slat (nsec): min=4352, max=92196, avg=10204.59, stdev=4951.06 00:10:46.512 clat (usec): min=165, max=687, avg=285.70, stdev=59.48 00:10:46.512 lat (usec): min=195, max=696, avg=295.90, stdev=59.98 00:10:46.512 clat percentiles (usec): 00:10:46.512 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 237], 00:10:46.512 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 281], 00:10:46.512 | 70.00th=[ 310], 80.00th=[ 338], 90.00th=[ 371], 95.00th=[ 404], 00:10:46.512 | 99.00th=[ 457], 99.50th=[ 494], 99.90th=[ 603], 99.95th=[ 685], 00:10:46.512 | 99.99th=[ 685] 00:10:46.512 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:46.512 slat (usec): min=3, max=148, avg=20.02, stdev=17.68 00:10:46.512 clat (usec): min=35, max=2331, avg=236.00, stdev=86.42 00:10:46.512 lat (usec): min=157, max=2364, avg=256.02, stdev=89.57 00:10:46.512 clat percentiles (usec): 
00:10:46.512 | 1.00th=[ 163], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:10:46.512 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 229], 00:10:46.512 | 70.00th=[ 241], 80.00th=[ 269], 90.00th=[ 310], 95.00th=[ 343], 00:10:46.512 | 99.00th=[ 437], 99.50th=[ 506], 99.90th=[ 1385], 99.95th=[ 2073], 00:10:46.512 | 99.99th=[ 2343] 00:10:46.512 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:10:46.512 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:46.512 lat (usec) : 50=0.03%, 100=0.03%, 250=57.88%, 500=41.57%, 750=0.41% 00:10:46.512 lat (msec) : 2=0.03%, 4=0.05% 00:10:46.512 cpu : usr=1.10%, sys=4.20%, ctx=3901, majf=0, minf=7 00:10:46.512 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.512 issued rwts: total=1592,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.512 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.512 00:10:46.512 Run status group 0 (all jobs): 00:10:46.512 READ: bw=26.0MiB/s (27.3MB/s), 6322KiB/s-7313KiB/s (6473kB/s-7488kB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:10:46.512 WRITE: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:10:46.512 00:10:46.512 Disk stats (read/write): 00:10:46.512 nvme0n1: ios=1586/1996, merge=0/0, ticks=413/388, in_queue=801, util=89.18% 00:10:46.512 nvme0n2: ios=1585/1729, merge=0/0, ticks=457/381, in_queue=838, util=88.80% 00:10:46.512 nvme0n3: ios=1536/1804, merge=0/0, ticks=415/366, in_queue=781, util=88.32% 00:10:46.512 nvme0n4: ios=1536/1734, merge=0/0, ticks=420/358, in_queue=778, util=89.60% 00:10:46.512 12:30:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:46.512 [global] 00:10:46.512 thread=1 00:10:46.512 invalidate=1 00:10:46.512 rw=write 00:10:46.512 time_based=1 00:10:46.512 runtime=1 00:10:46.512 ioengine=libaio 00:10:46.512 direct=1 00:10:46.512 bs=4096 00:10:46.512 iodepth=128 00:10:46.512 norandommap=0 00:10:46.512 numjobs=1 00:10:46.512 00:10:46.512 verify_dump=1 00:10:46.512 verify_backlog=512 00:10:46.512 verify_state_save=0 00:10:46.512 do_verify=1 00:10:46.512 verify=crc32c-intel 00:10:46.512 [job0] 00:10:46.512 filename=/dev/nvme0n1 00:10:46.512 [job1] 00:10:46.512 filename=/dev/nvme0n2 00:10:46.512 [job2] 00:10:46.512 filename=/dev/nvme0n3 00:10:46.512 [job3] 00:10:46.512 filename=/dev/nvme0n4 00:10:46.512 Could not set queue depth (nvme0n1) 00:10:46.512 Could not set queue depth (nvme0n2) 00:10:46.512 Could not set queue depth (nvme0n3) 00:10:46.512 Could not set queue depth (nvme0n4) 00:10:46.513 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.513 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.513 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.513 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.513 fio-3.35 00:10:46.513 Starting 4 threads 00:10:47.887 00:10:47.887 job0: (groupid=0, jobs=1): err= 0: pid=79354: Tue Nov 19 12:30:52 2024 00:10:47.887 read: IOPS=4594, BW=17.9MiB/s 
(18.8MB/s)(18.0MiB/1003msec) 00:10:47.887 slat (usec): min=5, max=3725, avg=100.21, stdev=398.45 00:10:47.887 clat (usec): min=10081, max=17377, avg=13538.88, stdev=872.24 00:10:47.887 lat (usec): min=10100, max=17425, avg=13639.10, stdev=929.42 00:10:47.887 clat percentiles (usec): 00:10:47.887 | 1.00th=[11207], 5.00th=[12125], 10.00th=[12780], 20.00th=[13042], 00:10:47.887 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:10:47.887 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14746], 95.00th=[15401], 00:10:47.887 | 99.00th=[15926], 99.50th=[16057], 99.90th=[16450], 99.95th=[16581], 00:10:47.887 | 99.99th=[17433] 00:10:47.887 write: IOPS=4973, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1003msec); 0 zone resets 00:10:47.887 slat (usec): min=11, max=5440, avg=99.49, stdev=488.01 00:10:47.887 clat (usec): min=2701, max=18794, avg=12901.20, stdev=1408.74 00:10:47.887 lat (usec): min=2721, max=18875, avg=13000.69, stdev=1483.92 00:10:47.887 clat percentiles (usec): 00:10:47.887 | 1.00th=[ 7046], 5.00th=[11731], 10.00th=[12125], 20.00th=[12387], 00:10:47.887 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:10:47.887 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[15401], 00:10:47.887 | 99.00th=[16319], 99.50th=[16909], 99.90th=[17957], 99.95th=[17957], 00:10:47.887 | 99.99th=[18744] 00:10:47.887 bw ( KiB/s): min=18408, max=20480, per=26.05%, avg=19444.00, stdev=1465.13, samples=2 00:10:47.887 iops : min= 4602, max= 5120, avg=4861.00, stdev=366.28, samples=2 00:10:47.887 lat (msec) : 4=0.44%, 10=0.50%, 20=99.06% 00:10:47.887 cpu : usr=5.89%, sys=12.48%, ctx=346, majf=0, minf=1 00:10:47.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:47.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.887 issued rwts: total=4608,4988,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.887 job1: (groupid=0, jobs=1): err= 0: pid=79355: Tue Nov 19 12:30:52 2024 00:10:47.887 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:10:47.887 slat (usec): min=3, max=3351, avg=100.41, stdev=476.65 00:10:47.887 clat (usec): min=9896, max=14834, avg=13520.02, stdev=628.19 00:10:47.887 lat (usec): min=12499, max=14844, avg=13620.44, stdev=415.48 00:10:47.887 clat percentiles (usec): 00:10:47.887 | 1.00th=[10683], 5.00th=[12780], 10.00th=[12911], 20.00th=[13173], 00:10:47.887 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13566], 60.00th=[13698], 00:10:47.887 | 70.00th=[13829], 80.00th=[13960], 90.00th=[14091], 95.00th=[14222], 00:10:47.887 | 99.00th=[14484], 99.50th=[14746], 99.90th=[14877], 99.95th=[14877], 00:10:47.887 | 99.99th=[14877] 00:10:47.887 write: IOPS=5014, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1002msec); 0 zone resets 00:10:47.887 slat (usec): min=10, max=3590, avg=98.91, stdev=421.82 00:10:47.887 clat (usec): min=338, max=14635, avg=12776.90, stdev=1140.66 00:10:47.887 lat (usec): min=2866, max=14669, avg=12875.81, stdev=1061.32 00:10:47.887 clat percentiles (usec): 00:10:47.887 | 1.00th=[ 6259], 5.00th=[11731], 10.00th=[12387], 20.00th=[12649], 00:10:47.887 | 30.00th=[12780], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:10:47.887 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:10:47.887 | 99.00th=[14353], 99.50th=[14484], 99.90th=[14615], 99.95th=[14615], 00:10:47.887 | 99.99th=[14615] 00:10:47.887 bw ( KiB/s): 
min=20480, max=20480, per=27.44%, avg=20480.00, stdev= 0.00, samples=1 00:10:47.887 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:47.887 lat (usec) : 500=0.01% 00:10:47.887 lat (msec) : 4=0.33%, 10=0.71%, 20=98.95% 00:10:47.887 cpu : usr=5.19%, sys=13.39%, ctx=303, majf=0, minf=4 00:10:47.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:47.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.887 issued rwts: total=4608,5025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.887 job2: (groupid=0, jobs=1): err= 0: pid=79356: Tue Nov 19 12:30:52 2024 00:10:47.887 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:10:47.887 slat (usec): min=5, max=13865, avg=116.65, stdev=716.05 00:10:47.887 clat (usec): min=5043, max=29188, avg=15868.06, stdev=2671.66 00:10:47.887 lat (usec): min=5068, max=29211, avg=15984.71, stdev=2685.97 00:10:47.887 clat percentiles (usec): 00:10:47.887 | 1.00th=[ 9765], 5.00th=[11600], 10.00th=[14353], 20.00th=[15008], 00:10:47.887 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15533], 60.00th=[15795], 00:10:47.887 | 70.00th=[15926], 80.00th=[16188], 90.00th=[16712], 95.00th=[22938], 00:10:47.887 | 99.00th=[26346], 99.50th=[27919], 99.90th=[28443], 99.95th=[28443], 00:10:47.887 | 99.99th=[29230] 00:10:47.887 write: IOPS=4405, BW=17.2MiB/s (18.0MB/s)(17.2MiB/1002msec); 0 zone resets 00:10:47.887 slat (usec): min=5, max=10113, avg=110.35, stdev=654.40 00:10:47.887 clat (usec): min=809, max=28201, avg=14050.74, stdev=2381.29 00:10:47.887 lat (usec): min=2543, max=28215, avg=14161.09, stdev=2319.05 00:10:47.887 clat percentiles (usec): 00:10:47.887 | 1.00th=[ 4178], 5.00th=[ 9241], 10.00th=[12125], 20.00th=[13173], 00:10:47.887 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14615], 60.00th=[14746], 00:10:47.887 | 70.00th=[15139], 80.00th=[15270], 90.00th=[15664], 95.00th=[16057], 00:10:47.887 | 99.00th=[19268], 99.50th=[19268], 99.90th=[20841], 99.95th=[20841], 00:10:47.887 | 99.99th=[28181] 00:10:47.887 bw ( KiB/s): min=16918, max=17344, per=22.95%, avg=17131.00, stdev=301.23, samples=2 00:10:47.887 iops : min= 4229, max= 4336, avg=4282.50, stdev=75.66, samples=2 00:10:47.887 lat (usec) : 1000=0.01% 00:10:47.887 lat (msec) : 4=0.41%, 10=3.61%, 20=92.54%, 50=3.43% 00:10:47.887 cpu : usr=3.00%, sys=13.39%, ctx=241, majf=0, minf=3 00:10:47.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:47.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.887 issued rwts: total=4096,4414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.887 job3: (groupid=0, jobs=1): err= 0: pid=79357: Tue Nov 19 12:30:52 2024 00:10:47.887 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:10:47.887 slat (usec): min=9, max=3654, avg=115.24, stdev=554.02 00:10:47.887 clat (usec): min=11509, max=17026, avg=15430.30, stdev=680.58 00:10:47.887 lat (usec): min=14371, max=17045, avg=15545.55, stdev=400.53 00:10:47.887 clat percentiles (usec): 00:10:47.887 | 1.00th=[12125], 5.00th=[14877], 10.00th=[15008], 20.00th=[15139], 00:10:47.887 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15533], 60.00th=[15533], 00:10:47.887 | 70.00th=[15664], 
80.00th=[15795], 90.00th=[16057], 95.00th=[16188], 00:10:47.887 | 99.00th=[16909], 99.50th=[16909], 99.90th=[16909], 99.95th=[16909], 00:10:47.887 | 99.99th=[16909] 00:10:47.887 write: IOPS=4276, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1003msec); 0 zone resets 00:10:47.887 slat (usec): min=13, max=3852, avg=114.63, stdev=500.57 00:10:47.887 clat (usec): min=149, max=16018, avg=14762.78, stdev=1373.22 00:10:47.887 lat (usec): min=3202, max=17043, avg=14877.41, stdev=1278.25 00:10:47.887 clat percentiles (usec): 00:10:47.887 | 1.00th=[ 7308], 5.00th=[12649], 10.00th=[14353], 20.00th=[14615], 00:10:47.887 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15008], 60.00th=[15139], 00:10:47.887 | 70.00th=[15270], 80.00th=[15401], 90.00th=[15533], 95.00th=[15664], 00:10:47.887 | 99.00th=[15926], 99.50th=[15926], 99.90th=[16057], 99.95th=[16057], 00:10:47.887 | 99.99th=[16057] 00:10:47.887 bw ( KiB/s): min=16384, max=16904, per=22.30%, avg=16644.00, stdev=367.70, samples=2 00:10:47.887 iops : min= 4096, max= 4226, avg=4161.00, stdev=91.92, samples=2 00:10:47.887 lat (usec) : 250=0.01% 00:10:47.887 lat (msec) : 4=0.38%, 10=0.38%, 20=99.22% 00:10:47.887 cpu : usr=4.09%, sys=12.97%, ctx=264, majf=0, minf=8 00:10:47.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:47.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.887 issued rwts: total=4096,4289,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.887 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.887 00:10:47.887 Run status group 0 (all jobs): 00:10:47.887 READ: bw=67.8MiB/s (71.1MB/s), 16.0MiB/s-18.0MiB/s (16.7MB/s-18.8MB/s), io=68.0MiB (71.3MB), run=1002-1003msec 00:10:47.887 WRITE: bw=72.9MiB/s (76.4MB/s), 16.7MiB/s-19.6MiB/s (17.5MB/s-20.5MB/s), io=73.1MiB (76.7MB), run=1002-1003msec 00:10:47.887 00:10:47.887 Disk stats (read/write): 00:10:47.887 nvme0n1: ios=4146/4186, merge=0/0, ticks=17400/14982, in_queue=32382, util=89.48% 00:10:47.887 nvme0n2: ios=4144/4224, merge=0/0, ticks=12366/11688, in_queue=24054, util=88.89% 00:10:47.887 nvme0n3: ios=3590/3711, merge=0/0, ticks=54325/48391, in_queue=102716, util=89.43% 00:10:47.887 nvme0n4: ios=3584/3680, merge=0/0, ticks=12521/11932, in_queue=24453, util=89.78% 00:10:47.888 12:30:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:47.888 [global] 00:10:47.888 thread=1 00:10:47.888 invalidate=1 00:10:47.888 rw=randwrite 00:10:47.888 time_based=1 00:10:47.888 runtime=1 00:10:47.888 ioengine=libaio 00:10:47.888 direct=1 00:10:47.888 bs=4096 00:10:47.888 iodepth=128 00:10:47.888 norandommap=0 00:10:47.888 numjobs=1 00:10:47.888 00:10:47.888 verify_dump=1 00:10:47.888 verify_backlog=512 00:10:47.888 verify_state_save=0 00:10:47.888 do_verify=1 00:10:47.888 verify=crc32c-intel 00:10:47.888 [job0] 00:10:47.888 filename=/dev/nvme0n1 00:10:47.888 [job1] 00:10:47.888 filename=/dev/nvme0n2 00:10:47.888 [job2] 00:10:47.888 filename=/dev/nvme0n3 00:10:47.888 [job3] 00:10:47.888 filename=/dev/nvme0n4 00:10:47.888 Could not set queue depth (nvme0n1) 00:10:47.888 Could not set queue depth (nvme0n2) 00:10:47.888 Could not set queue depth (nvme0n3) 00:10:47.888 Could not set queue depth (nvme0n4) 00:10:47.888 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.888 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.888 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.888 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.888 fio-3.35 00:10:47.888 Starting 4 threads 00:10:49.262 00:10:49.262 job0: (groupid=0, jobs=1): err= 0: pid=79412: Tue Nov 19 12:30:54 2024 00:10:49.262 read: IOPS=5532, BW=21.6MiB/s (22.7MB/s)(21.7MiB/1006msec) 00:10:49.262 slat (usec): min=9, max=5946, avg=84.63, stdev=520.66 00:10:49.262 clat (usec): min=4848, max=18974, avg=11899.52, stdev=1445.49 00:10:49.262 lat (usec): min=4860, max=22542, avg=11984.16, stdev=1467.48 00:10:49.262 clat percentiles (usec): 00:10:49.262 | 1.00th=[ 6783], 5.00th=[10159], 10.00th=[11207], 20.00th=[11469], 00:10:49.262 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:10:49.262 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12780], 95.00th=[13042], 00:10:49.262 | 99.00th=[18220], 99.50th=[18482], 99.90th=[19006], 99.95th=[19006], 00:10:49.262 | 99.99th=[19006] 00:10:49.262 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:10:49.262 slat (usec): min=10, max=7829, avg=86.32, stdev=499.43 00:10:49.262 clat (usec): min=5808, max=15427, avg=10862.37, stdev=969.97 00:10:49.262 lat (usec): min=7875, max=15451, avg=10948.70, stdev=861.47 00:10:49.262 clat percentiles (usec): 00:10:49.262 | 1.00th=[ 7242], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10421], 00:10:49.262 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:10:49.262 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11600], 95.00th=[11863], 00:10:49.262 | 99.00th=[15401], 99.50th=[15401], 99.90th=[15401], 99.95th=[15401], 00:10:49.262 | 99.99th=[15401] 00:10:49.262 bw ( KiB/s): min=22008, max=23048, per=35.07%, avg=22528.00, stdev=735.39, samples=2 00:10:49.262 iops : min= 5502, max= 5762, avg=5632.00, stdev=183.85, samples=2 00:10:49.262 lat (msec) : 10=8.05%, 20=91.95% 00:10:49.262 cpu : usr=4.38%, sys=14.93%, ctx=239, majf=0, minf=7 00:10:49.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:49.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.262 issued rwts: total=5566,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.262 job1: (groupid=0, jobs=1): err= 0: pid=79413: Tue Nov 19 12:30:54 2024 00:10:49.262 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:10:49.262 slat (usec): min=5, max=11953, avg=198.43, stdev=1087.22 00:10:49.262 clat (usec): min=15156, max=40095, avg=26608.70, stdev=4142.62 00:10:49.262 lat (usec): min=15184, max=48614, avg=26807.13, stdev=4168.93 00:10:49.262 clat percentiles (usec): 00:10:49.262 | 1.00th=[15401], 5.00th=[20841], 10.00th=[23200], 20.00th=[24773], 00:10:49.262 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:10:49.262 | 70.00th=[26346], 80.00th=[29754], 90.00th=[32637], 95.00th=[34866], 00:10:49.262 | 99.00th=[38536], 99.50th=[39060], 99.90th=[40109], 99.95th=[40109], 00:10:49.262 | 99.99th=[40109] 00:10:49.262 write: IOPS=2607, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1005msec); 0 zone resets 00:10:49.262 slat (usec): min=6, max=15924, avg=177.74, stdev=1065.81 00:10:49.262 clat (usec): min=3416, 
max=35915, avg=22622.79, stdev=4225.42 00:10:49.262 lat (usec): min=11506, max=35933, avg=22800.53, stdev=4160.63 00:10:49.262 clat percentiles (usec): 00:10:49.262 | 1.00th=[11863], 5.00th=[13435], 10.00th=[16909], 20.00th=[20579], 00:10:49.262 | 30.00th=[21627], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:10:49.262 | 70.00th=[24249], 80.00th=[24511], 90.00th=[28181], 95.00th=[29754], 00:10:49.262 | 99.00th=[32900], 99.50th=[33424], 99.90th=[33817], 99.95th=[34341], 00:10:49.262 | 99.99th=[35914] 00:10:49.262 bw ( KiB/s): min= 8192, max=12288, per=15.94%, avg=10240.00, stdev=2896.31, samples=2 00:10:49.262 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:10:49.262 lat (msec) : 4=0.02%, 20=10.91%, 50=89.08% 00:10:49.262 cpu : usr=3.19%, sys=6.67%, ctx=328, majf=0, minf=13 00:10:49.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:49.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.262 issued rwts: total=2560,2621,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.262 job2: (groupid=0, jobs=1): err= 0: pid=79414: Tue Nov 19 12:30:54 2024 00:10:49.262 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:10:49.262 slat (usec): min=5, max=13658, avg=130.82, stdev=628.38 00:10:49.262 clat (usec): min=10000, max=36578, avg=17433.63, stdev=6237.87 00:10:49.262 lat (usec): min=10019, max=37627, avg=17564.46, stdev=6283.81 00:10:49.262 clat percentiles (usec): 00:10:49.262 | 1.00th=[11469], 5.00th=[12256], 10.00th=[13042], 20.00th=[13304], 00:10:49.262 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13829], 60.00th=[14222], 00:10:49.262 | 70.00th=[18220], 80.00th=[24511], 90.00th=[25822], 95.00th=[31065], 00:10:49.262 | 99.00th=[34866], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:10:49.262 | 99.99th=[36439] 00:10:49.262 write: IOPS=3993, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1002msec); 0 zone resets 00:10:49.262 slat (usec): min=5, max=12120, avg=125.66, stdev=688.92 00:10:49.262 clat (usec): min=785, max=35883, avg=15715.95, stdev=5245.16 00:10:49.262 lat (usec): min=3702, max=35908, avg=15841.61, stdev=5284.35 00:10:49.262 clat percentiles (usec): 00:10:49.262 | 1.00th=[ 8160], 5.00th=[11076], 10.00th=[11863], 20.00th=[12256], 00:10:49.262 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13566], 00:10:49.262 | 70.00th=[18220], 80.00th=[19792], 90.00th=[23200], 95.00th=[25297], 00:10:49.262 | 99.00th=[32375], 99.50th=[33162], 99.90th=[35390], 99.95th=[35390], 00:10:49.262 | 99.99th=[35914] 00:10:49.262 bw ( KiB/s): min=20480, max=20480, per=31.88%, avg=20480.00, stdev= 0.00, samples=1 00:10:49.262 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:49.262 lat (usec) : 1000=0.01% 00:10:49.262 lat (msec) : 4=0.17%, 10=1.17%, 20=75.61%, 50=23.03% 00:10:49.262 cpu : usr=2.80%, sys=11.39%, ctx=413, majf=0, minf=19 00:10:49.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:49.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.262 issued rwts: total=3584,4001,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.262 job3: (groupid=0, jobs=1): err= 0: pid=79415: Tue Nov 19 12:30:54 2024 00:10:49.262 read: 
IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:10:49.262 slat (usec): min=7, max=13380, avg=131.57, stdev=917.83 00:10:49.262 clat (usec): min=8117, max=39893, avg=18048.06, stdev=6095.05 00:10:49.262 lat (usec): min=8131, max=48329, avg=18179.62, stdev=6141.85 00:10:49.262 clat percentiles (usec): 00:10:49.262 | 1.00th=[ 8717], 5.00th=[12649], 10.00th=[13042], 20.00th=[13304], 00:10:49.262 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14091], 60.00th=[15664], 00:10:49.262 | 70.00th=[25035], 80.00th=[25560], 90.00th=[25822], 95.00th=[26084], 00:10:49.262 | 99.00th=[27919], 99.50th=[35390], 99.90th=[40109], 99.95th=[40109], 00:10:49.262 | 99.99th=[40109] 00:10:49.262 write: IOPS=3883, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1005msec); 0 zone resets 00:10:49.263 slat (usec): min=7, max=15504, avg=128.77, stdev=878.60 00:10:49.263 clat (usec): min=554, max=31547, avg=16057.30, stdev=5613.54 00:10:49.263 lat (usec): min=5200, max=31749, avg=16186.07, stdev=5599.17 00:10:49.263 clat percentiles (usec): 00:10:49.263 | 1.00th=[ 6194], 5.00th=[10945], 10.00th=[11469], 20.00th=[11994], 00:10:49.263 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13042], 60.00th=[13698], 00:10:49.263 | 70.00th=[20841], 80.00th=[22938], 90.00th=[24249], 95.00th=[24511], 00:10:49.263 | 99.00th=[29230], 99.50th=[29230], 99.90th=[31589], 99.95th=[31589], 00:10:49.263 | 99.99th=[31589] 00:10:49.263 bw ( KiB/s): min=12263, max=17912, per=23.48%, avg=15087.50, stdev=3994.45, samples=2 00:10:49.263 iops : min= 3065, max= 4478, avg=3771.50, stdev=999.14, samples=2 00:10:49.263 lat (usec) : 750=0.01% 00:10:49.263 lat (msec) : 10=2.89%, 20=62.62%, 50=34.49% 00:10:49.263 cpu : usr=3.78%, sys=9.86%, ctx=169, majf=0, minf=7 00:10:49.263 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:49.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.263 issued rwts: total=3584,3903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.263 00:10:49.263 Run status group 0 (all jobs): 00:10:49.263 READ: bw=59.4MiB/s (62.3MB/s), 9.95MiB/s-21.6MiB/s (10.4MB/s-22.7MB/s), io=59.7MiB (62.6MB), run=1002-1006msec 00:10:49.263 WRITE: bw=62.7MiB/s (65.8MB/s), 10.2MiB/s-21.9MiB/s (10.7MB/s-22.9MB/s), io=63.1MiB (66.2MB), run=1002-1006msec 00:10:49.263 00:10:49.263 Disk stats (read/write): 00:10:49.263 nvme0n1: ios=4658/4864, merge=0/0, ticks=51785/48195, in_queue=99980, util=88.16% 00:10:49.263 nvme0n2: ios=2086/2434, merge=0/0, ticks=42538/40881, in_queue=83419, util=87.23% 00:10:49.263 nvme0n3: ios=3169/3584, merge=0/0, ticks=24170/24364, in_queue=48534, util=88.05% 00:10:49.263 nvme0n4: ios=2892/3072, merge=0/0, ticks=52723/49758, in_queue=102481, util=89.74% 00:10:49.263 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:49.263 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=79434 00:10:49.263 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:49.263 12:30:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:49.263 [global] 00:10:49.263 thread=1 00:10:49.263 invalidate=1 00:10:49.263 rw=read 00:10:49.263 time_based=1 00:10:49.263 runtime=10 00:10:49.263 ioengine=libaio 00:10:49.263 direct=1 00:10:49.263 bs=4096 00:10:49.263 
iodepth=1 00:10:49.263 norandommap=1 00:10:49.263 numjobs=1 00:10:49.263 00:10:49.263 [job0] 00:10:49.263 filename=/dev/nvme0n1 00:10:49.263 [job1] 00:10:49.263 filename=/dev/nvme0n2 00:10:49.263 [job2] 00:10:49.263 filename=/dev/nvme0n3 00:10:49.263 [job3] 00:10:49.263 filename=/dev/nvme0n4 00:10:49.263 Could not set queue depth (nvme0n1) 00:10:49.263 Could not set queue depth (nvme0n2) 00:10:49.263 Could not set queue depth (nvme0n3) 00:10:49.263 Could not set queue depth (nvme0n4) 00:10:49.263 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.263 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.263 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.263 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:49.263 fio-3.35 00:10:49.263 Starting 4 threads 00:10:52.541 12:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:52.541 fio: pid=79477, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:52.541 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46206976, buflen=4096 00:10:52.541 12:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:52.541 fio: pid=79476, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:52.541 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=69009408, buflen=4096 00:10:52.541 12:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.541 12:30:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:52.798 fio: pid=79474, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:52.798 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=58798080, buflen=4096 00:10:53.056 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.056 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:53.056 fio: pid=79475, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:53.056 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=18403328, buflen=4096 00:10:53.315 00:10:53.315 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79474: Tue Nov 19 12:30:58 2024 00:10:53.315 read: IOPS=4062, BW=15.9MiB/s (16.6MB/s)(56.1MiB/3534msec) 00:10:53.315 slat (usec): min=8, max=14432, avg=16.30, stdev=179.01 00:10:53.315 clat (usec): min=111, max=2995, avg=228.48, stdev=56.06 00:10:53.315 lat (usec): min=138, max=14636, avg=244.78, stdev=187.28 00:10:53.315 clat percentiles (usec): 00:10:53.315 | 1.00th=[ 139], 5.00th=[ 149], 10.00th=[ 157], 20.00th=[ 169], 00:10:53.315 | 30.00th=[ 200], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:10:53.315 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:10:53.315 | 99.00th=[ 
297], 99.50th=[ 318], 99.90th=[ 570], 99.95th=[ 775], 00:10:53.315 | 99.99th=[ 1631] 00:10:53.315 bw ( KiB/s): min=14379, max=18680, per=23.10%, avg=15412.50, stdev=1613.35, samples=6 00:10:53.315 iops : min= 3594, max= 4670, avg=3853.00, stdev=403.43, samples=6 00:10:53.315 lat (usec) : 250=57.59%, 500=42.27%, 750=0.08%, 1000=0.02% 00:10:53.315 lat (msec) : 2=0.03%, 4=0.01% 00:10:53.315 cpu : usr=1.27%, sys=5.09%, ctx=14372, majf=0, minf=1 00:10:53.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.315 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.315 issued rwts: total=14356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.315 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79475: Tue Nov 19 12:30:58 2024 00:10:53.315 read: IOPS=5495, BW=21.5MiB/s (22.5MB/s)(81.6MiB/3799msec) 00:10:53.315 slat (usec): min=7, max=15030, avg=15.46, stdev=159.23 00:10:53.315 clat (usec): min=115, max=2962, avg=165.20, stdev=38.80 00:10:53.315 lat (usec): min=126, max=15237, avg=180.67, stdev=164.63 00:10:53.315 clat percentiles (usec): 00:10:53.315 | 1.00th=[ 131], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:10:53.315 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:10:53.315 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 204], 00:10:53.315 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 314], 99.95th=[ 478], 00:10:53.315 | 99.99th=[ 2040] 00:10:53.315 bw ( KiB/s): min=18503, max=23040, per=32.96%, avg=21986.57, stdev=1638.67, samples=7 00:10:53.315 iops : min= 4625, max= 5760, avg=5496.43, stdev=409.93, samples=7 00:10:53.315 lat (usec) : 250=98.01%, 500=1.94%, 750=0.02% 00:10:53.315 lat (msec) : 2=0.01%, 4=0.02% 00:10:53.315 cpu : usr=1.24%, sys=6.64%, ctx=20887, majf=0, minf=1 00:10:53.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.315 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.315 issued rwts: total=20878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.315 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79476: Tue Nov 19 12:30:58 2024 00:10:53.315 read: IOPS=5149, BW=20.1MiB/s (21.1MB/s)(65.8MiB/3272msec) 00:10:53.315 slat (usec): min=9, max=11603, avg=15.73, stdev=125.18 00:10:53.315 clat (usec): min=140, max=2981, avg=177.03, stdev=35.29 00:10:53.315 lat (usec): min=154, max=12191, avg=192.76, stdev=132.91 00:10:53.315 clat percentiles (usec): 00:10:53.315 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:10:53.315 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:10:53.315 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 229], 00:10:53.315 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 465], 99.95th=[ 644], 00:10:53.315 | 99.99th=[ 930] 00:10:53.315 bw ( KiB/s): min=20496, max=21864, per=31.81%, avg=21221.00, stdev=620.13, samples=6 00:10:53.315 iops : min= 5124, max= 5466, avg=5305.17, stdev=155.10, samples=6 00:10:53.315 lat (usec) : 250=97.64%, 500=2.28%, 750=0.04%, 1000=0.03% 00:10:53.315 lat (msec) : 4=0.01% 00:10:53.315 cpu : usr=1.44%, sys=6.85%, 
ctx=16852, majf=0, minf=1 00:10:53.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.315 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.315 issued rwts: total=16849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.315 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79477: Tue Nov 19 12:30:58 2024 00:10:53.315 read: IOPS=3808, BW=14.9MiB/s (15.6MB/s)(44.1MiB/2962msec) 00:10:53.315 slat (nsec): min=8451, max=87705, avg=12066.00, stdev=3558.13 00:10:53.315 clat (usec): min=145, max=3918, avg=249.20, stdev=69.69 00:10:53.315 lat (usec): min=161, max=3942, avg=261.27, stdev=69.20 00:10:53.315 clat percentiles (usec): 00:10:53.315 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 192], 20.00th=[ 239], 00:10:53.315 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:10:53.315 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 281], 00:10:53.315 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 494], 99.95th=[ 824], 00:10:53.315 | 99.99th=[ 3425] 00:10:53.315 bw ( KiB/s): min=14784, max=17360, per=23.01%, avg=15353.60, stdev=1123.73, samples=5 00:10:53.315 iops : min= 3696, max= 4340, avg=3838.40, stdev=280.93, samples=5 00:10:53.315 lat (usec) : 250=42.51%, 500=57.38%, 750=0.03%, 1000=0.03% 00:10:53.315 lat (msec) : 2=0.01%, 4=0.04% 00:10:53.315 cpu : usr=1.05%, sys=4.36%, ctx=11286, majf=0, minf=2 00:10:53.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.315 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.315 issued rwts: total=11282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.315 00:10:53.315 Run status group 0 (all jobs): 00:10:53.316 READ: bw=65.1MiB/s (68.3MB/s), 14.9MiB/s-21.5MiB/s (15.6MB/s-22.5MB/s), io=248MiB (260MB), run=2962-3799msec 00:10:53.316 00:10:53.316 Disk stats (read/write): 00:10:53.316 nvme0n1: ios=13412/0, merge=0/0, ticks=3066/0, in_queue=3066, util=95.25% 00:10:53.316 nvme0n2: ios=19762/0, merge=0/0, ticks=3336/0, in_queue=3336, util=95.40% 00:10:53.316 nvme0n3: ios=16263/0, merge=0/0, ticks=2876/0, in_queue=2876, util=96.09% 00:10:53.316 nvme0n4: ios=10941/0, merge=0/0, ticks=2590/0, in_queue=2590, util=96.46% 00:10:53.316 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.316 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:53.574 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.574 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:53.832 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.832 12:30:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc4 00:10:54.089 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.089 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:54.347 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.347 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:54.605 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:54.605 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 79434 00:10:54.605 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:54.605 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.605 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.605 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:54.605 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:54.605 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.605 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.605 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:54.605 nvmf hotplug test: fio failed as expected 00:10:54.605 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:54.605 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:54.605 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:54.605 12:30:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.862 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:54.862 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:54.862 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:54.862 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:54.862 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:54.862 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:54.862 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:54.862 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:54.862 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:54.862 12:31:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:54.862 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:54.862 rmmod nvme_tcp 00:10:54.862 rmmod nvme_fabrics 00:10:54.862 rmmod nvme_keyring 00:10:54.862 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.862 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:54.862 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:54.862 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 79053 ']' 00:10:54.862 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 79053 00:10:54.863 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 79053 ']' 00:10:54.863 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 79053 00:10:54.863 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:54.863 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.863 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79053 00:10:54.863 killing process with pid 79053 00:10:54.863 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:54.863 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:54.863 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79053' 00:10:54.863 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 79053 00:10:54.863 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 79053 00:10:55.121 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:55.121 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:55.121 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:55.121 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:55.121 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:10:55.121 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:10:55.121 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:55.121 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:55.121 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:55.121 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:55.121 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:55.121 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:55.121 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # 
ip link set nvmf_tgt_br2 nomaster 00:10:55.121 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:55.121 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:55.122 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:55.122 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:55.122 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:55.122 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:55.381 00:10:55.381 real 0m19.346s 00:10:55.381 user 1m12.546s 00:10:55.381 sys 0m10.132s 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.381 ************************************ 00:10:55.381 END TEST nvmf_fio_target 00:10:55.381 ************************************ 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:55.381 ************************************ 00:10:55.381 START TEST nvmf_bdevio 00:10:55.381 ************************************ 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:55.381 * Looking for test storage... 
00:10:55.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:55.381 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:55.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.641 --rc genhtml_branch_coverage=1 00:10:55.641 --rc genhtml_function_coverage=1 00:10:55.641 --rc genhtml_legend=1 00:10:55.641 --rc geninfo_all_blocks=1 00:10:55.641 --rc geninfo_unexecuted_blocks=1 00:10:55.641 00:10:55.641 ' 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:55.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.641 --rc genhtml_branch_coverage=1 00:10:55.641 --rc genhtml_function_coverage=1 00:10:55.641 --rc genhtml_legend=1 00:10:55.641 --rc geninfo_all_blocks=1 00:10:55.641 --rc geninfo_unexecuted_blocks=1 00:10:55.641 00:10:55.641 ' 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:55.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.641 --rc genhtml_branch_coverage=1 00:10:55.641 --rc genhtml_function_coverage=1 00:10:55.641 --rc genhtml_legend=1 00:10:55.641 --rc geninfo_all_blocks=1 00:10:55.641 --rc geninfo_unexecuted_blocks=1 00:10:55.641 00:10:55.641 ' 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:55.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.641 --rc genhtml_branch_coverage=1 00:10:55.641 --rc genhtml_function_coverage=1 00:10:55.641 --rc genhtml_legend=1 00:10:55.641 --rc geninfo_all_blocks=1 00:10:55.641 --rc geninfo_unexecuted_blocks=1 00:10:55.641 00:10:55.641 ' 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.641 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:10:55.641 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:55.642 Cannot find device "nvmf_init_br" 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:55.642 Cannot find device "nvmf_init_br2" 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:55.642 Cannot find device "nvmf_tgt_br" 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:55.642 Cannot find device "nvmf_tgt_br2" 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:55.642 Cannot find device "nvmf_init_br" 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:55.642 Cannot find device "nvmf_init_br2" 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:55.642 Cannot find device "nvmf_tgt_br" 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:55.642 Cannot find device "nvmf_tgt_br2" 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:55.642 Cannot find device "nvmf_br" 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:55.642 Cannot find device "nvmf_init_if" 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:55.642 Cannot find device "nvmf_init_if2" 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:55.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:55.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:55.642 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:55.642 
12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:55.901 12:31:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:55.901 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:55.901 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:10:55.901 00:10:55.901 --- 10.0.0.3 ping statistics --- 00:10:55.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.901 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:55.901 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:55.901 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:10:55.901 00:10:55.901 --- 10.0.0.4 ping statistics --- 00:10:55.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.901 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:55.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:10:55.901 00:10:55.901 --- 10.0.0.1 ping statistics --- 00:10:55.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.901 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:10:55.901 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:55.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:55.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:10:55.901 00:10:55.902 --- 10.0.0.2 ping statistics --- 00:10:55.902 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.902 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=79792 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 79792 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 79792 ']' 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:55.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:55.902 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.160 [2024-11-19 12:31:01.198807] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:56.160 [2024-11-19 12:31:01.198934] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.160 [2024-11-19 12:31:01.341559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.160 [2024-11-19 12:31:01.374537] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.160 [2024-11-19 12:31:01.374603] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.160 [2024-11-19 12:31:01.374630] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.160 [2024-11-19 12:31:01.374637] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.160 [2024-11-19 12:31:01.374644] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.160 [2024-11-19 12:31:01.374807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:56.160 [2024-11-19 12:31:01.375031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:56.160 [2024-11-19 12:31:01.375707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.160 [2024-11-19 12:31:01.375719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:56.160 [2024-11-19 12:31:01.404422] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.419 [2024-11-19 12:31:01.500304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.419 Malloc0 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.419 [2024-11-19 12:31:01.543058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:56.419 { 00:10:56.419 "params": { 00:10:56.419 "name": "Nvme$subsystem", 00:10:56.419 "trtype": "$TEST_TRANSPORT", 00:10:56.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:56.419 "adrfam": "ipv4", 00:10:56.419 "trsvcid": "$NVMF_PORT", 00:10:56.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:56.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:56.419 "hdgst": ${hdgst:-false}, 00:10:56.419 "ddgst": ${ddgst:-false} 00:10:56.419 }, 00:10:56.419 "method": "bdev_nvme_attach_controller" 00:10:56.419 } 00:10:56.419 EOF 00:10:56.419 )") 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
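Note: the rpc_cmd calls traced above build the whole target under test: a TCP transport, a 64 MiB malloc bdev, subsystem cnode1 with that bdev as its namespace, and a listener on 10.0.0.3:4420. Replayed by hand they would look roughly like this (the rpc.py path and the default /var/tmp/spdk.sock RPC socket are assumptions here; the test drives the same RPCs through its rpc_cmd helper):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                  # bdevio.sh@18
$rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The JSON generated by gen_nvmf_target_json right after this (the bdev_nvme_attach_controller block printed in the trace) is what the bdevio app consumes via --json /dev/fd/62 to connect back to that listener.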
00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:10:56.419 12:31:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:56.419 "params": { 00:10:56.419 "name": "Nvme1", 00:10:56.419 "trtype": "tcp", 00:10:56.419 "traddr": "10.0.0.3", 00:10:56.419 "adrfam": "ipv4", 00:10:56.419 "trsvcid": "4420", 00:10:56.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:56.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:56.419 "hdgst": false, 00:10:56.419 "ddgst": false 00:10:56.419 }, 00:10:56.419 "method": "bdev_nvme_attach_controller" 00:10:56.419 }' 00:10:56.420 [2024-11-19 12:31:01.607628] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:10:56.420 [2024-11-19 12:31:01.607768] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79826 ] 00:10:56.678 [2024-11-19 12:31:01.751008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:56.678 [2024-11-19 12:31:01.789733] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.678 [2024-11-19 12:31:01.789849] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.678 [2024-11-19 12:31:01.789855] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.678 [2024-11-19 12:31:01.829359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:56.678 I/O targets: 00:10:56.678 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:56.678 00:10:56.678 00:10:56.678 CUnit - A unit testing framework for C - Version 2.1-3 00:10:56.678 http://cunit.sourceforge.net/ 00:10:56.678 00:10:56.678 00:10:56.678 Suite: bdevio tests on: Nvme1n1 00:10:56.678 Test: blockdev write read block ...passed 00:10:56.678 Test: blockdev write zeroes read block ...passed 00:10:56.985 Test: blockdev write zeroes read no split ...passed 00:10:56.985 Test: blockdev write zeroes read split ...passed 00:10:56.985 Test: blockdev write zeroes read split partial ...passed 00:10:56.985 Test: blockdev reset ...[2024-11-19 12:31:01.958658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:56.985 [2024-11-19 12:31:01.958951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1442b40 (9): Bad file descriptor 00:10:56.985 passed 00:10:56.985 Test: blockdev write read 8 blocks ...[2024-11-19 12:31:01.973525] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:56.985 passed 00:10:56.985 Test: blockdev write read size > 128k ...passed 00:10:56.985 Test: blockdev write read invalid size ...passed 00:10:56.985 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:56.985 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:56.985 Test: blockdev write read max offset ...passed 00:10:56.985 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:56.985 Test: blockdev writev readv 8 blocks ...passed 00:10:56.985 Test: blockdev writev readv 30 x 1block ...passed 00:10:56.985 Test: blockdev writev readv block ...passed 00:10:56.985 Test: blockdev writev readv size > 128k ...passed 00:10:56.985 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:56.985 Test: blockdev comparev and writev ...[2024-11-19 12:31:01.981718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.985 [2024-11-19 12:31:01.981927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:56.985 [2024-11-19 12:31:01.981962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.985 [2024-11-19 12:31:01.981977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:56.985 [2024-11-19 12:31:01.982301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.985 [2024-11-19 12:31:01.982329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:56.985 [2024-11-19 12:31:01.982350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.985 [2024-11-19 12:31:01.982363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:56.985 [2024-11-19 12:31:01.982645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.985 [2024-11-19 12:31:01.982691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:56.985 [2024-11-19 12:31:01.982715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.986 [2024-11-19 12:31:01.982727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:56.986 [2024-11-19 12:31:01.983049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.986 [2024-11-19 12:31:01.983074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:56.986 [2024-11-19 12:31:01.983094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.986 [2024-11-19 12:31:01.983106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:10:56.986 passed 00:10:56.986 Test: blockdev nvme passthru rw ...passed 00:10:56.986 Test: blockdev nvme passthru vendor specific ...passed 00:10:56.986 Test: blockdev nvme admin passthru ...[2024-11-19 12:31:01.984165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.986 [2024-11-19 12:31:01.984202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:56.986 [2024-11-19 12:31:01.984328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.986 [2024-11-19 12:31:01.984353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:56.986 [2024-11-19 12:31:01.984481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.986 [2024-11-19 12:31:01.984505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:56.986 [2024-11-19 12:31:01.984621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.986 [2024-11-19 12:31:01.984646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:56.986 passed 00:10:56.986 Test: blockdev copy ...passed 00:10:56.986 00:10:56.986 Run Summary: Type Total Ran Passed Failed Inactive 00:10:56.986 suites 1 1 n/a 0 0 00:10:56.986 tests 23 23 23 0 0 00:10:56.986 asserts 152 152 152 0 n/a 00:10:56.986 00:10:56.986 Elapsed time = 0.153 seconds 00:10:56.986 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.986 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.986 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.986 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.986 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:56.986 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:56.986 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:56.986 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:56.986 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:56.986 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:56.986 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:56.986 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:56.986 rmmod nvme_tcp 00:10:56.986 rmmod nvme_fabrics 00:10:56.986 rmmod nvme_keyring 00:10:57.244 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.244 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:57.244 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:57.244 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@513 -- # '[' -n 79792 ']' 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 79792 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 79792 ']' 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 79792 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79792 00:10:57.245 killing process with pid 79792 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79792' 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 79792 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 79792 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:57.245 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:57.504 12:31:02 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:57.504 00:10:57.504 real 0m2.182s 00:10:57.504 user 0m5.337s 00:10:57.504 sys 0m0.773s 00:10:57.504 ************************************ 00:10:57.504 END TEST nvmf_bdevio 00:10:57.504 ************************************ 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:57.504 ************************************ 00:10:57.504 END TEST nvmf_target_core 00:10:57.504 ************************************ 00:10:57.504 00:10:57.504 real 2m29.926s 00:10:57.504 user 6m31.032s 00:10:57.504 sys 0m52.688s 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.504 12:31:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:57.763 12:31:02 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:57.763 12:31:02 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:57.763 12:31:02 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.763 12:31:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:57.763 ************************************ 00:10:57.763 START TEST nvmf_target_extra 00:10:57.763 ************************************ 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:57.763 * Looking for test storage... 
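Note: each test above is driven by a run_test wrapper that prints the START TEST / END TEST banners and the real/user/sys timings seen in this log. A minimal analogue of that wrapper, illustrative only and not SPDK's actual autotest_common.sh implementation:

run_test() {                       # illustrative stand-in for the real helper
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp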
00:10:57.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:57.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.763 --rc genhtml_branch_coverage=1 00:10:57.763 --rc genhtml_function_coverage=1 00:10:57.763 --rc genhtml_legend=1 00:10:57.763 --rc geninfo_all_blocks=1 00:10:57.763 --rc geninfo_unexecuted_blocks=1 00:10:57.763 00:10:57.763 ' 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:57.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.763 --rc genhtml_branch_coverage=1 00:10:57.763 --rc genhtml_function_coverage=1 00:10:57.763 --rc genhtml_legend=1 00:10:57.763 --rc geninfo_all_blocks=1 00:10:57.763 --rc geninfo_unexecuted_blocks=1 00:10:57.763 00:10:57.763 ' 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:57.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.763 --rc genhtml_branch_coverage=1 00:10:57.763 --rc genhtml_function_coverage=1 00:10:57.763 --rc genhtml_legend=1 00:10:57.763 --rc geninfo_all_blocks=1 00:10:57.763 --rc geninfo_unexecuted_blocks=1 00:10:57.763 00:10:57.763 ' 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:57.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.763 --rc genhtml_branch_coverage=1 00:10:57.763 --rc genhtml_function_coverage=1 00:10:57.763 --rc genhtml_legend=1 00:10:57.763 --rc geninfo_all_blocks=1 00:10:57.763 --rc geninfo_unexecuted_blocks=1 00:10:57.763 00:10:57.763 ' 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.763 12:31:02 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:57.763 12:31:02 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.764 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.764 12:31:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:58.023 ************************************ 00:10:58.023 START TEST nvmf_auth_target 00:10:58.023 ************************************ 00:10:58.023 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:58.023 * Looking for test storage... 
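Note on the "[: : integer expression expected" message from common.sh line 33 above: it comes from a numeric test on an empty variable; test(1) needs an integer on both sides of -eq. A two-line illustration (the defaulted form is only an illustrative fix, not what the script does):

flag=''
[ "$flag" -eq 1 ]        # -> "[: : integer expression expected" (exit status 2)
[ "${flag:-0}" -eq 1 ]   # defaulting the empty value avoids the error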
00:10:58.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:58.023 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:58.023 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:58.023 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:58.023 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:58.023 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:58.023 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:58.023 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:58.023 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:58.023 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:58.023 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:58.023 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:58.023 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:58.023 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:58.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.024 --rc genhtml_branch_coverage=1 00:10:58.024 --rc genhtml_function_coverage=1 00:10:58.024 --rc genhtml_legend=1 00:10:58.024 --rc geninfo_all_blocks=1 00:10:58.024 --rc geninfo_unexecuted_blocks=1 00:10:58.024 00:10:58.024 ' 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:58.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.024 --rc genhtml_branch_coverage=1 00:10:58.024 --rc genhtml_function_coverage=1 00:10:58.024 --rc genhtml_legend=1 00:10:58.024 --rc geninfo_all_blocks=1 00:10:58.024 --rc geninfo_unexecuted_blocks=1 00:10:58.024 00:10:58.024 ' 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:58.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.024 --rc genhtml_branch_coverage=1 00:10:58.024 --rc genhtml_function_coverage=1 00:10:58.024 --rc genhtml_legend=1 00:10:58.024 --rc geninfo_all_blocks=1 00:10:58.024 --rc geninfo_unexecuted_blocks=1 00:10:58.024 00:10:58.024 ' 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:58.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.024 --rc genhtml_branch_coverage=1 00:10:58.024 --rc genhtml_function_coverage=1 00:10:58.024 --rc genhtml_legend=1 00:10:58.024 --rc geninfo_all_blocks=1 00:10:58.024 --rc geninfo_unexecuted_blocks=1 00:10:58.024 00:10:58.024 ' 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:58.024 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:58.024 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:58.025 
12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:58.025 Cannot find device "nvmf_init_br" 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:58.025 Cannot find device "nvmf_init_br2" 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:58.025 Cannot find device "nvmf_tgt_br" 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:58.025 Cannot find device "nvmf_tgt_br2" 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:10:58.025 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:58.284 Cannot find device "nvmf_init_br" 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:58.284 Cannot find device "nvmf_init_br2" 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:58.284 Cannot find device "nvmf_tgt_br" 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:58.284 Cannot find device "nvmf_tgt_br2" 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:58.284 Cannot find device "nvmf_br" 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:58.284 Cannot find device "nvmf_init_if" 00:10:58.284 12:31:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:58.284 Cannot find device "nvmf_init_if2" 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:58.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:58.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:58.284 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:58.285 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:58.285 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:58.285 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:58.285 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:58.285 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:58.285 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:58.285 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:58.285 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:58.285 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:58.285 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:58.285 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:58.285 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:58.285 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:58.285 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:58.285 12:31:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:58.285 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:58.285 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:58.544 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:58.544 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:58.544 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:58.544 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:58.544 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:58.544 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:58.544 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:58.544 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:58.544 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:58.544 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:58.544 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:58.544 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:58.544 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:58.544 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:10:58.544 00:10:58.544 --- 10.0.0.3 ping statistics --- 00:10:58.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.544 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:58.544 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:58.544 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:58.544 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.105 ms 00:10:58.544 00:10:58.544 --- 10.0.0.4 ping statistics --- 00:10:58.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.544 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:10:58.544 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:58.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:58.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:10:58.544 00:10:58.544 --- 10.0.0.1 ping statistics --- 00:10:58.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.545 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:58.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:10:58.545 00:10:58.545 --- 10.0.0.2 ping statistics --- 00:10:58.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.545 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # return 0 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=80108 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 80108 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 80108 ']' 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
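The nvmf/common.sh setup traced above reduces to a small veth-and-bridge topology with the target running in its own network namespace. A condensed sketch follows (namespace, interface names, addresses and the nvmf_tgt command line are taken from the log above; this is not the exact helper code):

    # one veth pair per endpoint; the *_if ends stay in the root namespace,
    # the nvmf_tgt_if* ends are moved into the target's namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up        # every interface (and lo in the netns) is brought up the same way
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # bridge the *_br peers so initiator side and target namespace can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    # open the NVMe/TCP port, verify reachability, then start the target inside the namespace
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth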
00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:58.545 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.804 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:58.804 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:58.804 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:58.804 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:58.804 12:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=80127 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=af5ecb6af22c9e3c6617fd40d6cc4a0dca2ccbdd6acdf471 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.rNc 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key af5ecb6af22c9e3c6617fd40d6cc4a0dca2ccbdd6acdf471 0 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 af5ecb6af22c9e3c6617fd40d6cc4a0dca2ccbdd6acdf471 0 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=af5ecb6af22c9e3c6617fd40d6cc4a0dca2ccbdd6acdf471 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:10:58.804 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:59.064 12:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.rNc 00:10:59.064 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.rNc 00:10:59.064 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.rNc 00:10:59.064 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:10:59.064 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:59.064 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:59.064 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:59.064 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:10:59.064 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:10:59.064 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:59.064 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=80afb3484d4d94ad1120e2053cb5c1f04c39d5cb52586e2cca3d4a63eb02ad8c 00:10:59.064 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:10:59.064 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.56S 00:10:59.064 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 80afb3484d4d94ad1120e2053cb5c1f04c39d5cb52586e2cca3d4a63eb02ad8c 3 00:10:59.064 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 80afb3484d4d94ad1120e2053cb5c1f04c39d5cb52586e2cca3d4a63eb02ad8c 3 00:10:59.064 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:59.064 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=80afb3484d4d94ad1120e2053cb5c1f04c39d5cb52586e2cca3d4a63eb02ad8c 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.56S 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.56S 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.56S 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:10:59.065 12:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=a69486543f653ca0f41287dfa5e8abbc 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.cxi 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key a69486543f653ca0f41287dfa5e8abbc 1 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 a69486543f653ca0f41287dfa5e8abbc 1 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=a69486543f653ca0f41287dfa5e8abbc 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.cxi 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.cxi 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.cxi 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=d0b4e5af8d2e9bae0f49927d063cdcea553a8fb80ab0d7ca 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.Yt3 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key d0b4e5af8d2e9bae0f49927d063cdcea553a8fb80ab0d7ca 2 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 d0b4e5af8d2e9bae0f49927d063cdcea553a8fb80ab0d7ca 2 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=d0b4e5af8d2e9bae0f49927d063cdcea553a8fb80ab0d7ca 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.Yt3 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.Yt3 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Yt3 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:59.065 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=69cf2acba735d696de7c3a84a99b922cf57e8ff813c9aaf9 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.6Z8 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 69cf2acba735d696de7c3a84a99b922cf57e8ff813c9aaf9 2 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 69cf2acba735d696de7c3a84a99b922cf57e8ff813c9aaf9 2 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=69cf2acba735d696de7c3a84a99b922cf57e8ff813c9aaf9 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.6Z8 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.6Z8 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.6Z8 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:59.325 12:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=89b181c9dee6187fb502fad9d07e7ddb 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.Vtw 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 89b181c9dee6187fb502fad9d07e7ddb 1 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 89b181c9dee6187fb502fad9d07e7ddb 1 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=89b181c9dee6187fb502fad9d07e7ddb 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.Vtw 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.Vtw 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Vtw 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=d1c034367ecb7cb36d5ca36b2b8404bb4da06ed6e12c73108d47ce86a2b0904b 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.mCR 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 
d1c034367ecb7cb36d5ca36b2b8404bb4da06ed6e12c73108d47ce86a2b0904b 3 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 d1c034367ecb7cb36d5ca36b2b8404bb4da06ed6e12c73108d47ce86a2b0904b 3 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=d1c034367ecb7cb36d5ca36b2b8404bb4da06ed6e12c73108d47ce86a2b0904b 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.mCR 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.mCR 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.mCR 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 80108 00:10:59.325 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 80108 ']' 00:10:59.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.326 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.326 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:59.326 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.326 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:59.326 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:59.586 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.586 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:59.586 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 80127 /var/tmp/host.sock 00:10:59.586 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 80127 ']' 00:10:59.586 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:10:59.586 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:59.586 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
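Each gen_dhchap_key call above draws half as many random bytes as the requested hex length via xxd and wraps the hex string into a DHHC-1 secret file. The secrets printed later in the log (e.g. DHHC-1:00:YWY1ZWNi...) are consistent with base64 over the ASCII hex key followed by a 4-byte CRC-32, with the digest index from the digests map ('null'=0 ... 'sha512'=3); the sketch below is a stand-in built on that inference, not a copy of the nvmf/common.sh helpers:

    # gen_dhchap_key null 48  ->  48 hex characters, digest index 0
    key=$(xxd -p -c0 -l 24 /dev/urandom)
    keyfile=$(mktemp -t spdk.key-null.XXX)
    # DHHC-1:<digest>:<base64(key || crc32(key), little-endian)>:  (CRC suffix is an assumption)
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:00:"+base64.b64encode(k+crc).decode()+":")' "$key" > "$keyfile"
    chmod 0600 "$keyfile"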
00:10:59.586 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:59.586 12:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.845 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.845 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:59.845 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:10:59.845 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.845 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.104 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.104 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:00.104 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rNc 00:11:00.104 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.104 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.104 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.104 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.rNc 00:11:00.104 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.rNc 00:11:00.104 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.56S ]] 00:11:00.104 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.56S 00:11:00.104 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.104 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.363 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.363 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.56S 00:11:00.363 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.56S 00:11:00.621 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:00.621 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.cxi 00:11:00.621 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.621 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.621 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.621 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.cxi 00:11:00.621 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.cxi 00:11:00.881 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Yt3 ]] 00:11:00.881 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yt3 00:11:00.881 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.881 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.881 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.881 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yt3 00:11:00.881 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yt3 00:11:01.140 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:01.140 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.6Z8 00:11:01.140 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.140 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.140 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.140 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.6Z8 00:11:01.140 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.6Z8 00:11:01.400 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Vtw ]] 00:11:01.400 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vtw 00:11:01.400 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.400 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.400 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.400 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vtw 00:11:01.400 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vtw 00:11:01.659 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:01.659 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.mCR 00:11:01.659 12:31:06 
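Once generated, every key file is registered twice: with the target's RPC socket (/var/tmp/spdk.sock, the rpc_cmd default) and with the host application's socket (/var/tmp/host.sock, via the hostrpc wrapper). A condensed view of the loop traced above, with paths and key names as they appear in the log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side (default socket /var/tmp/spdk.sock)
    $RPC keyring_file_add_key key0  /tmp/spdk.key-null.rNc
    $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.56S
    # host side (spdk_tgt was started with -r /var/tmp/host.sock)
    $RPC -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.rNc
    $RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.56S
    # repeated for key1/ckey1, key2/ckey2 and key3 (key3 has no controller key)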
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.659 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.659 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.659 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.mCR 00:11:01.659 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.mCR 00:11:01.919 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:01.919 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:01.919 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:01.919 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.919 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:01.919 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:02.178 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:02.178 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:02.178 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:02.178 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:02.178 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:02.178 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.178 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.178 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.178 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.178 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.178 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.178 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.178 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.437 00:11:02.437 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.437 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.437 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:03.008 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.008 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.008 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.008 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.008 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.008 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:03.008 { 00:11:03.008 "cntlid": 1, 00:11:03.008 "qid": 0, 00:11:03.008 "state": "enabled", 00:11:03.008 "thread": "nvmf_tgt_poll_group_000", 00:11:03.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:03.008 "listen_address": { 00:11:03.008 "trtype": "TCP", 00:11:03.008 "adrfam": "IPv4", 00:11:03.008 "traddr": "10.0.0.3", 00:11:03.008 "trsvcid": "4420" 00:11:03.008 }, 00:11:03.008 "peer_address": { 00:11:03.008 "trtype": "TCP", 00:11:03.008 "adrfam": "IPv4", 00:11:03.008 "traddr": "10.0.0.1", 00:11:03.008 "trsvcid": "33144" 00:11:03.008 }, 00:11:03.008 "auth": { 00:11:03.008 "state": "completed", 00:11:03.008 "digest": "sha256", 00:11:03.008 "dhgroup": "null" 00:11:03.008 } 00:11:03.008 } 00:11:03.008 ]' 00:11:03.008 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:03.008 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:03.008 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:03.008 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:03.008 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:03.008 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.008 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.008 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.267 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:11:03.267 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:11:07.455 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.714 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:07.714 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.714 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.714 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.714 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:07.714 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:07.714 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:07.973 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:07.973 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:07.973 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:07.973 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:07.973 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:07.973 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.973 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.973 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.973 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.973 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.973 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.973 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.973 12:31:13 
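Every iteration of the digest/dhgroup loop performs the same round-trip that is traced above for key0: set the host's DH-HMAC-CHAP options, authorize the host NQN on the subsystem with a key pair, attach a controller through the SPDK host application, confirm the qpair's auth state, detach, then repeat the handshake from the kernel initiator with the formatted DHHC-1 secrets. Condensed to the essential commands (NQNs, host UUID, addresses and flags copied from the log; the *_secret shell variables stand in for the DHHC-1 strings generated earlier):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9
    HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # host-side DH-HMAC-CHAP policy for this iteration of the loop
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    # allow the host on the subsystem with key0/ckey0, then authenticate via the SPDK host app
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'   # expect "completed"
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # same handshake from the kernel initiator, using the formatted DHHC-1 secrets
    nvme connect -t tcp -a 10.0.0.3 -n $SUBNQN -i 1 -q $HOSTNQN --hostid $HOSTID -l 0 \
        --dhchap-secret "$key0_secret" --dhchap-ctrl-secret "$ckey0_secret"
    nvme disconnect -n $SUBNQN
    $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN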
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.231 00:11:08.231 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.231 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.232 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.491 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.491 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.491 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.491 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.491 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.491 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:08.491 { 00:11:08.491 "cntlid": 3, 00:11:08.491 "qid": 0, 00:11:08.491 "state": "enabled", 00:11:08.491 "thread": "nvmf_tgt_poll_group_000", 00:11:08.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:08.491 "listen_address": { 00:11:08.491 "trtype": "TCP", 00:11:08.491 "adrfam": "IPv4", 00:11:08.491 "traddr": "10.0.0.3", 00:11:08.491 "trsvcid": "4420" 00:11:08.491 }, 00:11:08.491 "peer_address": { 00:11:08.491 "trtype": "TCP", 00:11:08.491 "adrfam": "IPv4", 00:11:08.491 "traddr": "10.0.0.1", 00:11:08.491 "trsvcid": "45078" 00:11:08.491 }, 00:11:08.491 "auth": { 00:11:08.491 "state": "completed", 00:11:08.491 "digest": "sha256", 00:11:08.491 "dhgroup": "null" 00:11:08.491 } 00:11:08.491 } 00:11:08.491 ]' 00:11:08.491 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:08.491 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:08.491 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:08.749 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:08.749 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:08.749 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.749 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.749 12:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.008 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret 
DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:11:09.008 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:11:09.576 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.576 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:09.576 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.576 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.576 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.576 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:09.576 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:09.576 12:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:10.144 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:10.144 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.144 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:10.144 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:10.144 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:10.144 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.144 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.144 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.144 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.144 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.144 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.144 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.144 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.403 00:11:10.403 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.403 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.403 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.686 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.686 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.686 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.686 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.686 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.686 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.686 { 00:11:10.686 "cntlid": 5, 00:11:10.686 "qid": 0, 00:11:10.686 "state": "enabled", 00:11:10.686 "thread": "nvmf_tgt_poll_group_000", 00:11:10.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:10.686 "listen_address": { 00:11:10.686 "trtype": "TCP", 00:11:10.686 "adrfam": "IPv4", 00:11:10.686 "traddr": "10.0.0.3", 00:11:10.686 "trsvcid": "4420" 00:11:10.686 }, 00:11:10.686 "peer_address": { 00:11:10.686 "trtype": "TCP", 00:11:10.686 "adrfam": "IPv4", 00:11:10.686 "traddr": "10.0.0.1", 00:11:10.686 "trsvcid": "45104" 00:11:10.686 }, 00:11:10.686 "auth": { 00:11:10.686 "state": "completed", 00:11:10.686 "digest": "sha256", 00:11:10.686 "dhgroup": "null" 00:11:10.686 } 00:11:10.686 } 00:11:10.686 ]' 00:11:10.686 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.686 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:10.945 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.945 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:10.945 12:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.945 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.945 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.945 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.204 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:11:11.204 12:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:11:11.772 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.772 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:11.772 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.772 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.031 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.031 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:12.031 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:12.031 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:12.290 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:12.290 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.290 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:12.290 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:12.290 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:12.290 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.290 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:11:12.290 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.290 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.290 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.290 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:12.290 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:12.290 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:12.549 00:11:12.549 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.549 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.549 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.808 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.808 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.808 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.808 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.808 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.808 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.808 { 00:11:12.808 "cntlid": 7, 00:11:12.808 "qid": 0, 00:11:12.808 "state": "enabled", 00:11:12.808 "thread": "nvmf_tgt_poll_group_000", 00:11:12.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:12.808 "listen_address": { 00:11:12.808 "trtype": "TCP", 00:11:12.808 "adrfam": "IPv4", 00:11:12.808 "traddr": "10.0.0.3", 00:11:12.809 "trsvcid": "4420" 00:11:12.809 }, 00:11:12.809 "peer_address": { 00:11:12.809 "trtype": "TCP", 00:11:12.809 "adrfam": "IPv4", 00:11:12.809 "traddr": "10.0.0.1", 00:11:12.809 "trsvcid": "45130" 00:11:12.809 }, 00:11:12.809 "auth": { 00:11:12.809 "state": "completed", 00:11:12.809 "digest": "sha256", 00:11:12.809 "dhgroup": "null" 00:11:12.809 } 00:11:12.809 } 00:11:12.809 ]' 00:11:12.809 12:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.809 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:12.809 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.809 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:12.809 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.068 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.068 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.068 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.327 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:11:13.327 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:11:13.900 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.900 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:13.900 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.900 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.900 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.900 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:13.900 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.900 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:13.900 12:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:14.158 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:14.158 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.158 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:14.158 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:14.158 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:14.158 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.158 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.159 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.159 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.159 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.159 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.159 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.159 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.417 00:11:14.417 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.417 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.417 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.675 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.675 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.675 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.675 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.675 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.675 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.675 { 00:11:14.675 "cntlid": 9, 00:11:14.675 "qid": 0, 00:11:14.675 "state": "enabled", 00:11:14.675 "thread": "nvmf_tgt_poll_group_000", 00:11:14.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:14.675 "listen_address": { 00:11:14.675 "trtype": "TCP", 00:11:14.675 "adrfam": "IPv4", 00:11:14.675 "traddr": "10.0.0.3", 00:11:14.675 "trsvcid": "4420" 00:11:14.675 }, 00:11:14.675 "peer_address": { 00:11:14.675 "trtype": "TCP", 00:11:14.675 "adrfam": "IPv4", 00:11:14.675 "traddr": "10.0.0.1", 00:11:14.675 "trsvcid": "45862" 00:11:14.675 }, 00:11:14.675 "auth": { 00:11:14.675 "state": "completed", 00:11:14.675 "digest": "sha256", 00:11:14.675 "dhgroup": "ffdhe2048" 00:11:14.675 } 00:11:14.675 } 00:11:14.675 ]' 00:11:14.675 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.934 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:14.934 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.934 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:14.934 12:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.934 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.934 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.934 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.192 
12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:11:15.193 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:11:15.759 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.759 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:15.759 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.759 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.759 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.759 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.759 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:15.759 12:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:16.018 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:16.018 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.018 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:16.018 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:16.018 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:16.018 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.018 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.018 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.018 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.018 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.018 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.018 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.018 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.585 00:11:16.585 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.585 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.585 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:16.844 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.844 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.844 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.844 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.844 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.844 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:16.844 { 00:11:16.844 "cntlid": 11, 00:11:16.844 "qid": 0, 00:11:16.844 "state": "enabled", 00:11:16.844 "thread": "nvmf_tgt_poll_group_000", 00:11:16.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:16.844 "listen_address": { 00:11:16.844 "trtype": "TCP", 00:11:16.844 "adrfam": "IPv4", 00:11:16.844 "traddr": "10.0.0.3", 00:11:16.844 "trsvcid": "4420" 00:11:16.844 }, 00:11:16.844 "peer_address": { 00:11:16.844 "trtype": "TCP", 00:11:16.844 "adrfam": "IPv4", 00:11:16.844 "traddr": "10.0.0.1", 00:11:16.844 "trsvcid": "45878" 00:11:16.844 }, 00:11:16.844 "auth": { 00:11:16.844 "state": "completed", 00:11:16.844 "digest": "sha256", 00:11:16.844 "dhgroup": "ffdhe2048" 00:11:16.844 } 00:11:16.844 } 00:11:16.844 ]' 00:11:16.844 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:16.844 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:16.844 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.844 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:16.844 12:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.844 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.844 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.844 
12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.102 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:11:17.102 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:11:18.037 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.037 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:18.037 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.037 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.037 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.037 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.037 12:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:18.037 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:18.295 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:18.295 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.295 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:18.295 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:18.295 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:18.295 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.295 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.295 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.295 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.295 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:18.295 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.295 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.295 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.553 00:11:18.553 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.553 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.553 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.812 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.812 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.812 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.812 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.812 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.812 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.812 { 00:11:18.812 "cntlid": 13, 00:11:18.812 "qid": 0, 00:11:18.812 "state": "enabled", 00:11:18.812 "thread": "nvmf_tgt_poll_group_000", 00:11:18.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:18.812 "listen_address": { 00:11:18.812 "trtype": "TCP", 00:11:18.812 "adrfam": "IPv4", 00:11:18.812 "traddr": "10.0.0.3", 00:11:18.812 "trsvcid": "4420" 00:11:18.812 }, 00:11:18.812 "peer_address": { 00:11:18.812 "trtype": "TCP", 00:11:18.812 "adrfam": "IPv4", 00:11:18.812 "traddr": "10.0.0.1", 00:11:18.812 "trsvcid": "45902" 00:11:18.812 }, 00:11:18.812 "auth": { 00:11:18.812 "state": "completed", 00:11:18.812 "digest": "sha256", 00:11:18.812 "dhgroup": "ffdhe2048" 00:11:18.812 } 00:11:18.812 } 00:11:18.812 ]' 00:11:18.812 12:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.812 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:18.812 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.812 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:18.812 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.071 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.071 12:31:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.071 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.330 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:11:19.330 12:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:11:19.897 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.897 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:19.897 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.897 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.897 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.897 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:19.897 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:19.897 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:20.464 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:20.464 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.464 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:20.465 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:20.465 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:20.465 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.465 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:11:20.465 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.465 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:20.465 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.465 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:20.465 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:20.465 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:20.724 00:11:20.724 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:20.724 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.724 12:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.983 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.983 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.983 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.983 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.983 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.983 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:20.983 { 00:11:20.983 "cntlid": 15, 00:11:20.983 "qid": 0, 00:11:20.983 "state": "enabled", 00:11:20.983 "thread": "nvmf_tgt_poll_group_000", 00:11:20.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:20.983 "listen_address": { 00:11:20.983 "trtype": "TCP", 00:11:20.983 "adrfam": "IPv4", 00:11:20.983 "traddr": "10.0.0.3", 00:11:20.983 "trsvcid": "4420" 00:11:20.983 }, 00:11:20.983 "peer_address": { 00:11:20.983 "trtype": "TCP", 00:11:20.983 "adrfam": "IPv4", 00:11:20.983 "traddr": "10.0.0.1", 00:11:20.983 "trsvcid": "45918" 00:11:20.983 }, 00:11:20.983 "auth": { 00:11:20.983 "state": "completed", 00:11:20.983 "digest": "sha256", 00:11:20.983 "dhgroup": "ffdhe2048" 00:11:20.983 } 00:11:20.983 } 00:11:20.983 ]' 00:11:20.983 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:20.983 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:20.983 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:20.983 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:20.983 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.983 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.983 
12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.983 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.243 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:11:21.243 12:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:11:22.179 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.179 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:22.179 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.179 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.179 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.179 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:22.179 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.179 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:22.179 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:22.179 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:22.179 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.179 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:22.179 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:22.179 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:22.179 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.180 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.180 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.180 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:22.180 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.180 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.180 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.180 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.748 00:11:22.748 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.748 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.748 12:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.007 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.007 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.007 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.007 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.007 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.007 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.007 { 00:11:23.007 "cntlid": 17, 00:11:23.007 "qid": 0, 00:11:23.007 "state": "enabled", 00:11:23.007 "thread": "nvmf_tgt_poll_group_000", 00:11:23.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:23.007 "listen_address": { 00:11:23.007 "trtype": "TCP", 00:11:23.007 "adrfam": "IPv4", 00:11:23.007 "traddr": "10.0.0.3", 00:11:23.007 "trsvcid": "4420" 00:11:23.007 }, 00:11:23.007 "peer_address": { 00:11:23.007 "trtype": "TCP", 00:11:23.007 "adrfam": "IPv4", 00:11:23.007 "traddr": "10.0.0.1", 00:11:23.007 "trsvcid": "45934" 00:11:23.007 }, 00:11:23.007 "auth": { 00:11:23.007 "state": "completed", 00:11:23.007 "digest": "sha256", 00:11:23.007 "dhgroup": "ffdhe3072" 00:11:23.007 } 00:11:23.007 } 00:11:23.007 ]' 00:11:23.007 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.007 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:23.007 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.007 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:23.007 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.007 12:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.007 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.007 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.273 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:11:23.273 12:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:11:23.902 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.902 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:23.902 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.902 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.902 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.902 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.902 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:23.902 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:24.160 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:24.160 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:24.160 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:24.160 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:24.160 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:24.160 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.160 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:24.160 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.160 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.160 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.160 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.160 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.160 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.727 00:11:24.727 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:24.727 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:24.727 12:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.986 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.986 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.986 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.986 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.986 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.986 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.986 { 00:11:24.986 "cntlid": 19, 00:11:24.986 "qid": 0, 00:11:24.986 "state": "enabled", 00:11:24.986 "thread": "nvmf_tgt_poll_group_000", 00:11:24.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:24.986 "listen_address": { 00:11:24.986 "trtype": "TCP", 00:11:24.986 "adrfam": "IPv4", 00:11:24.986 "traddr": "10.0.0.3", 00:11:24.986 "trsvcid": "4420" 00:11:24.986 }, 00:11:24.986 "peer_address": { 00:11:24.986 "trtype": "TCP", 00:11:24.986 "adrfam": "IPv4", 00:11:24.986 "traddr": "10.0.0.1", 00:11:24.986 "trsvcid": "57426" 00:11:24.986 }, 00:11:24.986 "auth": { 00:11:24.986 "state": "completed", 00:11:24.986 "digest": "sha256", 00:11:24.986 "dhgroup": "ffdhe3072" 00:11:24.986 } 00:11:24.986 } 00:11:24.986 ]' 00:11:24.986 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.986 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:24.986 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.986 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:24.986 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.986 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.986 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.986 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.245 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:11:25.245 12:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:11:26.182 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.182 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:26.182 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.182 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.182 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.182 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.182 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:26.182 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:26.441 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:26.441 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:26.441 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:26.441 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:26.441 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:26.441 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.442 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.442 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.442 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.442 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.442 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.442 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.442 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.700 00:11:26.700 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.700 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.700 12:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.959 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.959 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.959 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.959 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.959 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.959 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.959 { 00:11:26.959 "cntlid": 21, 00:11:26.959 "qid": 0, 00:11:26.959 "state": "enabled", 00:11:26.959 "thread": "nvmf_tgt_poll_group_000", 00:11:26.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:26.959 "listen_address": { 00:11:26.959 "trtype": "TCP", 00:11:26.959 "adrfam": "IPv4", 00:11:26.959 "traddr": "10.0.0.3", 00:11:26.959 "trsvcid": "4420" 00:11:26.959 }, 00:11:26.959 "peer_address": { 00:11:26.959 "trtype": "TCP", 00:11:26.959 "adrfam": "IPv4", 00:11:26.959 "traddr": "10.0.0.1", 00:11:26.959 "trsvcid": "57454" 00:11:26.959 }, 00:11:26.959 "auth": { 00:11:26.959 "state": "completed", 00:11:26.959 "digest": "sha256", 00:11:26.959 "dhgroup": "ffdhe3072" 00:11:26.959 } 00:11:26.959 } 00:11:26.959 ]' 00:11:26.959 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.959 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:26.960 12:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.960 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:26.960 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.220 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.220 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.220 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.481 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:11:27.481 12:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:11:28.049 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.049 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:28.049 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.049 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.049 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.049 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.049 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:28.049 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:28.308 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:28.308 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.308 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:28.308 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:28.308 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:28.308 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.308 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:11:28.308 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.308 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.308 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.308 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:28.309 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:28.309 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:28.568 00:11:28.568 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.568 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.568 12:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.827 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.827 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.827 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.827 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.827 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.827 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.827 { 00:11:28.827 "cntlid": 23, 00:11:28.827 "qid": 0, 00:11:28.827 "state": "enabled", 00:11:28.827 "thread": "nvmf_tgt_poll_group_000", 00:11:28.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:28.827 "listen_address": { 00:11:28.827 "trtype": "TCP", 00:11:28.827 "adrfam": "IPv4", 00:11:28.827 "traddr": "10.0.0.3", 00:11:28.827 "trsvcid": "4420" 00:11:28.827 }, 00:11:28.827 "peer_address": { 00:11:28.827 "trtype": "TCP", 00:11:28.827 "adrfam": "IPv4", 00:11:28.827 "traddr": "10.0.0.1", 00:11:28.827 "trsvcid": "57476" 00:11:28.827 }, 00:11:28.827 "auth": { 00:11:28.827 "state": "completed", 00:11:28.827 "digest": "sha256", 00:11:28.827 "dhgroup": "ffdhe3072" 00:11:28.827 } 00:11:28.827 } 00:11:28.827 ]' 00:11:28.827 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.827 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:28.827 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.086 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:29.086 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.086 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.086 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.086 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.345 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:11:29.345 12:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:11:29.912 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.170 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:30.170 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.170 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.170 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.170 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:30.170 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.170 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:30.170 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:30.428 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:30.428 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.428 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:30.428 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:30.428 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:30.428 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.428 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.428 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.428 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.428 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.428 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.428 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.428 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.686 00:11:30.686 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.686 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.686 12:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.945 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.945 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.945 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.945 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.945 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.945 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:30.945 { 00:11:30.945 "cntlid": 25, 00:11:30.945 "qid": 0, 00:11:30.945 "state": "enabled", 00:11:30.945 "thread": "nvmf_tgt_poll_group_000", 00:11:30.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:30.945 "listen_address": { 00:11:30.945 "trtype": "TCP", 00:11:30.945 "adrfam": "IPv4", 00:11:30.945 "traddr": "10.0.0.3", 00:11:30.945 "trsvcid": "4420" 00:11:30.945 }, 00:11:30.945 "peer_address": { 00:11:30.945 "trtype": "TCP", 00:11:30.945 "adrfam": "IPv4", 00:11:30.945 "traddr": "10.0.0.1", 00:11:30.945 "trsvcid": "57508" 00:11:30.945 }, 00:11:30.945 "auth": { 00:11:30.945 "state": "completed", 00:11:30.945 "digest": "sha256", 00:11:30.945 "dhgroup": "ffdhe4096" 00:11:30.945 } 00:11:30.945 } 00:11:30.945 ]' 00:11:30.945 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:31.204 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:31.204 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.204 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:31.204 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.204 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.204 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.204 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.461 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:11:31.461 12:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:11:32.396 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.396 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:32.396 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.396 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.396 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.396 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.396 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:32.396 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:32.655 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:32.655 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.655 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:32.655 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:32.655 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:32.655 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.655 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.655 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.655 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.655 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.655 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.655 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.655 12:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.913 00:11:32.913 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.913 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.913 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.482 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.482 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.482 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.482 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.482 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.482 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.482 { 00:11:33.482 "cntlid": 27, 00:11:33.482 "qid": 0, 00:11:33.482 "state": "enabled", 00:11:33.482 "thread": "nvmf_tgt_poll_group_000", 00:11:33.482 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:33.482 "listen_address": { 00:11:33.482 "trtype": "TCP", 00:11:33.482 "adrfam": "IPv4", 00:11:33.482 "traddr": "10.0.0.3", 00:11:33.482 "trsvcid": "4420" 00:11:33.482 }, 00:11:33.482 "peer_address": { 00:11:33.482 "trtype": "TCP", 00:11:33.482 "adrfam": "IPv4", 00:11:33.482 "traddr": "10.0.0.1", 00:11:33.482 "trsvcid": "57526" 00:11:33.482 }, 00:11:33.482 "auth": { 00:11:33.482 "state": "completed", 
00:11:33.482 "digest": "sha256", 00:11:33.482 "dhgroup": "ffdhe4096" 00:11:33.482 } 00:11:33.482 } 00:11:33.482 ]' 00:11:33.482 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.482 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:33.482 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.482 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:33.482 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.482 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.482 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.482 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.741 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:11:33.741 12:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:11:34.675 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.675 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:34.675 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.675 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.675 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.675 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.675 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:34.675 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:34.933 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:34.933 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.933 12:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:34.933 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:34.933 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:34.933 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.933 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.933 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.933 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.933 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.933 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.933 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.933 12:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.192 00:11:35.192 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.192 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.192 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.450 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.450 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.450 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.451 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.709 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.709 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.709 { 00:11:35.709 "cntlid": 29, 00:11:35.709 "qid": 0, 00:11:35.709 "state": "enabled", 00:11:35.709 "thread": "nvmf_tgt_poll_group_000", 00:11:35.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:35.709 "listen_address": { 00:11:35.709 "trtype": "TCP", 00:11:35.709 "adrfam": "IPv4", 00:11:35.709 "traddr": "10.0.0.3", 00:11:35.709 "trsvcid": "4420" 00:11:35.709 }, 00:11:35.709 "peer_address": { 00:11:35.709 "trtype": "TCP", 00:11:35.709 "adrfam": 
"IPv4", 00:11:35.709 "traddr": "10.0.0.1", 00:11:35.709 "trsvcid": "38852" 00:11:35.709 }, 00:11:35.709 "auth": { 00:11:35.709 "state": "completed", 00:11:35.709 "digest": "sha256", 00:11:35.709 "dhgroup": "ffdhe4096" 00:11:35.709 } 00:11:35.709 } 00:11:35.709 ]' 00:11:35.709 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.709 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:35.709 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.709 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:35.709 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.709 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.709 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.709 12:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.967 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:11:35.967 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:11:36.902 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.902 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:36.902 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.902 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.902 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.902 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:36.902 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:36.902 12:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:37.161 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:37.161 12:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.161 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:37.161 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:37.161 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:37.161 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.161 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:11:37.161 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.161 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.161 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.161 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:37.161 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:37.161 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:37.420 00:11:37.420 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.420 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.420 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.988 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.988 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.988 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.988 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.988 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.988 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.988 { 00:11:37.988 "cntlid": 31, 00:11:37.988 "qid": 0, 00:11:37.988 "state": "enabled", 00:11:37.988 "thread": "nvmf_tgt_poll_group_000", 00:11:37.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:37.988 "listen_address": { 00:11:37.988 "trtype": "TCP", 00:11:37.988 "adrfam": "IPv4", 00:11:37.988 "traddr": "10.0.0.3", 00:11:37.988 "trsvcid": "4420" 00:11:37.988 }, 00:11:37.988 "peer_address": { 00:11:37.988 "trtype": "TCP", 
00:11:37.988 "adrfam": "IPv4", 00:11:37.988 "traddr": "10.0.0.1", 00:11:37.988 "trsvcid": "38870" 00:11:37.988 }, 00:11:37.988 "auth": { 00:11:37.988 "state": "completed", 00:11:37.988 "digest": "sha256", 00:11:37.988 "dhgroup": "ffdhe4096" 00:11:37.988 } 00:11:37.988 } 00:11:37.988 ]' 00:11:37.988 12:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.988 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.988 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.988 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:37.988 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.988 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.988 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.988 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.246 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:11:38.246 12:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:11:39.183 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.183 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:39.183 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.183 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.183 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.183 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:39.183 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.183 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:39.183 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:39.443 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:39.443 
12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.443 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:39.443 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:39.443 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:39.443 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.443 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.443 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.443 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.443 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.443 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.443 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.443 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.702 00:11:39.702 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.702 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.702 12:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.961 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.961 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.961 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.961 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.221 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.221 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.221 { 00:11:40.221 "cntlid": 33, 00:11:40.221 "qid": 0, 00:11:40.221 "state": "enabled", 00:11:40.221 "thread": "nvmf_tgt_poll_group_000", 00:11:40.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:40.221 "listen_address": { 00:11:40.221 "trtype": "TCP", 00:11:40.221 "adrfam": "IPv4", 00:11:40.221 "traddr": 
"10.0.0.3", 00:11:40.221 "trsvcid": "4420" 00:11:40.221 }, 00:11:40.221 "peer_address": { 00:11:40.221 "trtype": "TCP", 00:11:40.221 "adrfam": "IPv4", 00:11:40.221 "traddr": "10.0.0.1", 00:11:40.221 "trsvcid": "38884" 00:11:40.221 }, 00:11:40.221 "auth": { 00:11:40.221 "state": "completed", 00:11:40.221 "digest": "sha256", 00:11:40.221 "dhgroup": "ffdhe6144" 00:11:40.221 } 00:11:40.221 } 00:11:40.221 ]' 00:11:40.221 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.221 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:40.221 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.221 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:40.221 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.221 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.221 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.221 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.480 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:11:40.480 12:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.432 12:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.000 00:11:42.000 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.000 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.000 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.259 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.259 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.259 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.259 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.259 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.259 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.259 { 00:11:42.259 "cntlid": 35, 00:11:42.259 "qid": 0, 00:11:42.259 "state": "enabled", 00:11:42.259 "thread": "nvmf_tgt_poll_group_000", 
00:11:42.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:42.259 "listen_address": { 00:11:42.259 "trtype": "TCP", 00:11:42.259 "adrfam": "IPv4", 00:11:42.259 "traddr": "10.0.0.3", 00:11:42.259 "trsvcid": "4420" 00:11:42.259 }, 00:11:42.259 "peer_address": { 00:11:42.259 "trtype": "TCP", 00:11:42.259 "adrfam": "IPv4", 00:11:42.259 "traddr": "10.0.0.1", 00:11:42.259 "trsvcid": "38894" 00:11:42.259 }, 00:11:42.259 "auth": { 00:11:42.259 "state": "completed", 00:11:42.259 "digest": "sha256", 00:11:42.259 "dhgroup": "ffdhe6144" 00:11:42.259 } 00:11:42.259 } 00:11:42.259 ]' 00:11:42.259 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.259 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:42.259 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.518 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:42.518 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.518 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.518 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.518 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.778 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:11:42.778 12:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:11:43.347 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.347 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:43.347 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.347 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.347 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.347 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.347 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:43.347 12:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:43.606 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:43.606 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.606 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:43.606 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:43.606 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:43.606 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.606 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.606 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.606 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.606 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.606 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.606 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.606 12:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.174 00:11:44.174 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.174 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.174 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.433 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.433 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.433 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.433 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.433 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.433 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.433 { 
00:11:44.433 "cntlid": 37, 00:11:44.433 "qid": 0, 00:11:44.433 "state": "enabled", 00:11:44.433 "thread": "nvmf_tgt_poll_group_000", 00:11:44.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:44.433 "listen_address": { 00:11:44.433 "trtype": "TCP", 00:11:44.433 "adrfam": "IPv4", 00:11:44.433 "traddr": "10.0.0.3", 00:11:44.433 "trsvcid": "4420" 00:11:44.433 }, 00:11:44.433 "peer_address": { 00:11:44.433 "trtype": "TCP", 00:11:44.433 "adrfam": "IPv4", 00:11:44.433 "traddr": "10.0.0.1", 00:11:44.433 "trsvcid": "38930" 00:11:44.433 }, 00:11:44.433 "auth": { 00:11:44.433 "state": "completed", 00:11:44.433 "digest": "sha256", 00:11:44.433 "dhgroup": "ffdhe6144" 00:11:44.433 } 00:11:44.433 } 00:11:44.433 ]' 00:11:44.433 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.433 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:44.433 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.433 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:44.433 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.433 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.433 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.433 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.693 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:11:44.693 12:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:45.631 12:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:46.200 00:11:46.200 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.200 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.200 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.459 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.459 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.459 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.459 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.459 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.459 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:46.459 { 00:11:46.459 "cntlid": 39, 00:11:46.459 "qid": 0, 00:11:46.459 "state": "enabled", 00:11:46.459 "thread": "nvmf_tgt_poll_group_000", 00:11:46.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:46.459 "listen_address": { 00:11:46.459 "trtype": "TCP", 00:11:46.459 "adrfam": "IPv4", 00:11:46.459 "traddr": "10.0.0.3", 00:11:46.459 "trsvcid": "4420" 00:11:46.459 }, 00:11:46.459 "peer_address": { 00:11:46.459 "trtype": "TCP", 00:11:46.459 "adrfam": "IPv4", 00:11:46.459 "traddr": "10.0.0.1", 00:11:46.459 "trsvcid": "39914" 00:11:46.459 }, 00:11:46.459 "auth": { 00:11:46.459 "state": "completed", 00:11:46.459 "digest": "sha256", 00:11:46.459 "dhgroup": "ffdhe6144" 00:11:46.459 } 00:11:46.459 } 00:11:46.459 ]' 00:11:46.459 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.459 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:46.459 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.459 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:46.459 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.718 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.718 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.718 12:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.977 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:11:46.977 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:11:47.544 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.544 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:47.544 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.544 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.544 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.544 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:47.544 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.544 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:47.544 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:47.803 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:47.803 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.803 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:47.803 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:47.803 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:47.803 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.803 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.803 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.803 12:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.803 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.803 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.803 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:47.803 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:48.371 00:11:48.630 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.630 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.630 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.887 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.887 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.887 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.887 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.887 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:48.887 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.887 { 00:11:48.887 "cntlid": 41, 00:11:48.887 "qid": 0, 00:11:48.887 "state": "enabled", 00:11:48.887 "thread": "nvmf_tgt_poll_group_000", 00:11:48.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:48.887 "listen_address": { 00:11:48.887 "trtype": "TCP", 00:11:48.887 "adrfam": "IPv4", 00:11:48.887 "traddr": "10.0.0.3", 00:11:48.887 "trsvcid": "4420" 00:11:48.887 }, 00:11:48.887 "peer_address": { 00:11:48.887 "trtype": "TCP", 00:11:48.887 "adrfam": "IPv4", 00:11:48.887 "traddr": "10.0.0.1", 00:11:48.887 "trsvcid": "39932" 00:11:48.887 }, 00:11:48.887 "auth": { 00:11:48.887 "state": "completed", 00:11:48.887 "digest": "sha256", 00:11:48.887 "dhgroup": "ffdhe8192" 00:11:48.887 } 00:11:48.887 } 00:11:48.887 ]' 00:11:48.887 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.887 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:48.888 12:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.888 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:48.888 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.888 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.888 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.888 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.146 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:11:49.146 12:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:11:50.081 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.081 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:50.081 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.081 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.081 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
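For reference, each pass of the key loop traced above boils down to the rpc.py sequence below (a minimal sketch distilled from this trace, not a substitute for target/auth.sh: the listener address 10.0.0.3:4420, the subsystem and host NQNs, and the host RPC socket /var/tmp/host.sock are the ones used in this run; key0/ckey0 are assumed to already be loaded as DH-HMAC-CHAP keys on both sides, and the target-side calls are assumed to go to the target's default RPC socket):

# target side: authorize the host on the subsystem and bind its DH-HMAC-CHAP keys
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host side (SPDK bdev initiator): attach a controller, authenticating with the same keys
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# tear down before the next key/dhgroup combination is tried
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9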
00:11:50.081 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.081 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:50.081 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:50.339 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:11:50.339 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.339 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:50.339 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:50.339 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:50.339 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.340 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.340 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.340 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.340 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.340 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.340 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.340 12:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.906 00:11:50.906 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.906 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.906 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:51.164 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.164 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.164 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.164 12:31:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.164 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.164 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.164 { 00:11:51.164 "cntlid": 43, 00:11:51.164 "qid": 0, 00:11:51.164 "state": "enabled", 00:11:51.164 "thread": "nvmf_tgt_poll_group_000", 00:11:51.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:51.164 "listen_address": { 00:11:51.164 "trtype": "TCP", 00:11:51.164 "adrfam": "IPv4", 00:11:51.164 "traddr": "10.0.0.3", 00:11:51.164 "trsvcid": "4420" 00:11:51.164 }, 00:11:51.164 "peer_address": { 00:11:51.164 "trtype": "TCP", 00:11:51.164 "adrfam": "IPv4", 00:11:51.164 "traddr": "10.0.0.1", 00:11:51.164 "trsvcid": "39974" 00:11:51.164 }, 00:11:51.164 "auth": { 00:11:51.164 "state": "completed", 00:11:51.164 "digest": "sha256", 00:11:51.164 "dhgroup": "ffdhe8192" 00:11:51.164 } 00:11:51.164 } 00:11:51.164 ]' 00:11:51.164 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.164 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:51.164 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.423 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:51.423 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.423 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.423 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.423 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.682 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:11:51.682 12:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:11:52.251 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.251 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:52.251 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.251 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
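The checks repeated after each attach are plain jq assertions over nvmf_subsystem_get_qpairs output: the admin qpair created by the authenticated connect must report the expected digest and DH group, and its auth state must read "completed". A condensed sketch of the same checks as target/auth.sh@73-77 above (assuming the target answers on its default RPC socket and that sha256/ffdhe8192 is the combination under test):

# controller name reported by the host must be the one we attached
[[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# negotiated auth parameters on the new qpair
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]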
00:11:52.251 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.251 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.251 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:52.251 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:52.510 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:52.510 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.510 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:52.510 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:52.510 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:52.510 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.510 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.510 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.510 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.510 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.510 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.510 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.510 12:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.078 00:11:53.078 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.078 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.078 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.337 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.337 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.337 12:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.337 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.337 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.337 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.337 { 00:11:53.337 "cntlid": 45, 00:11:53.337 "qid": 0, 00:11:53.337 "state": "enabled", 00:11:53.337 "thread": "nvmf_tgt_poll_group_000", 00:11:53.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:53.337 "listen_address": { 00:11:53.337 "trtype": "TCP", 00:11:53.337 "adrfam": "IPv4", 00:11:53.337 "traddr": "10.0.0.3", 00:11:53.337 "trsvcid": "4420" 00:11:53.337 }, 00:11:53.337 "peer_address": { 00:11:53.337 "trtype": "TCP", 00:11:53.337 "adrfam": "IPv4", 00:11:53.337 "traddr": "10.0.0.1", 00:11:53.337 "trsvcid": "39988" 00:11:53.337 }, 00:11:53.337 "auth": { 00:11:53.337 "state": "completed", 00:11:53.337 "digest": "sha256", 00:11:53.337 "dhgroup": "ffdhe8192" 00:11:53.337 } 00:11:53.337 } 00:11:53.337 ]' 00:11:53.337 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.337 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:53.337 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:53.597 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:53.597 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:53.597 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.597 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.597 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.856 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:11:53.856 12:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:11:54.424 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.424 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:54.424 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
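The kernel-initiator leg of each pass drives nvme-cli directly and passes the plaintext DHHC-1 secrets rather than named keys; the flags below mirror the nvme connect/disconnect invocations in this trace, with the secret values left as placeholders to be replaced by the DHHC-1:xx:...: strings generated for the run:

nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 \
    --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 \
    --dhchap-secret 'DHHC-1:xx:<host secret>:' --dhchap-ctrl-secret 'DHHC-1:xx:<controller secret>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0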
00:11:54.424 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.683 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.683 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.683 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:54.683 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:54.683 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:11:54.683 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.683 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:54.683 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:54.683 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:54.683 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.683 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:11:54.683 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.683 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.941 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.941 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:54.941 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:54.941 12:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:55.510 00:11:55.510 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.510 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.510 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.768 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.768 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.768 
12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.768 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.768 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.768 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.768 { 00:11:55.768 "cntlid": 47, 00:11:55.768 "qid": 0, 00:11:55.768 "state": "enabled", 00:11:55.768 "thread": "nvmf_tgt_poll_group_000", 00:11:55.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:55.768 "listen_address": { 00:11:55.768 "trtype": "TCP", 00:11:55.768 "adrfam": "IPv4", 00:11:55.768 "traddr": "10.0.0.3", 00:11:55.768 "trsvcid": "4420" 00:11:55.768 }, 00:11:55.768 "peer_address": { 00:11:55.768 "trtype": "TCP", 00:11:55.768 "adrfam": "IPv4", 00:11:55.768 "traddr": "10.0.0.1", 00:11:55.768 "trsvcid": "34290" 00:11:55.768 }, 00:11:55.768 "auth": { 00:11:55.768 "state": "completed", 00:11:55.768 "digest": "sha256", 00:11:55.768 "dhgroup": "ffdhe8192" 00:11:55.768 } 00:11:55.768 } 00:11:55.768 ]' 00:11:55.768 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.768 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:55.768 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.768 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:55.768 12:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.768 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.768 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.768 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.026 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:11:56.026 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:11:56.963 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.963 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:56.963 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.963 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
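From here the outer loops move past sha256/ffdhe8192: the next passes run sha384 with the null DH group and, further down, ffdhe2048, re-trying every key under each combination. Each combination is selected on the initiator by reconfiguring the nvme bdev module before connect_authenticate (the add_host/attach/verify/detach cycle sketched earlier) is invoked again. A minimal sketch of that outer matrix, assuming digests, dhgroups and keys arrays like the ones target/auth.sh iterates:

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # restrict the digest/dhgroup the host is willing to negotiate
      scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done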
00:11:56.963 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.963 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:56.963 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:56.963 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.963 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:56.963 12:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:56.963 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:56.963 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.963 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:56.963 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:56.963 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:56.963 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.963 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.963 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.963 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.963 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.963 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.963 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:56.963 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.531 00:11:57.531 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.531 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.531 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.831 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.831 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.831 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.831 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.831 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.832 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.832 { 00:11:57.832 "cntlid": 49, 00:11:57.832 "qid": 0, 00:11:57.832 "state": "enabled", 00:11:57.832 "thread": "nvmf_tgt_poll_group_000", 00:11:57.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:57.832 "listen_address": { 00:11:57.832 "trtype": "TCP", 00:11:57.832 "adrfam": "IPv4", 00:11:57.832 "traddr": "10.0.0.3", 00:11:57.832 "trsvcid": "4420" 00:11:57.832 }, 00:11:57.832 "peer_address": { 00:11:57.832 "trtype": "TCP", 00:11:57.832 "adrfam": "IPv4", 00:11:57.832 "traddr": "10.0.0.1", 00:11:57.832 "trsvcid": "34316" 00:11:57.832 }, 00:11:57.832 "auth": { 00:11:57.832 "state": "completed", 00:11:57.832 "digest": "sha384", 00:11:57.832 "dhgroup": "null" 00:11:57.832 } 00:11:57.832 } 00:11:57.832 ]' 00:11:57.832 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.832 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:57.832 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.832 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:57.832 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.832 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.832 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.832 12:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.111 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:11:58.111 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:11:59.048 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.048 12:32:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:11:59.048 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.048 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.048 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.048 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.048 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:59.048 12:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:59.048 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:59.048 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.048 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:59.048 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:59.048 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:59.048 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.048 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.048 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.048 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.048 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.048 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.048 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.048 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.615 00:11:59.615 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.615 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.615 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.874 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.874 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.874 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.874 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.874 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.874 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.874 { 00:11:59.874 "cntlid": 51, 00:11:59.874 "qid": 0, 00:11:59.874 "state": "enabled", 00:11:59.874 "thread": "nvmf_tgt_poll_group_000", 00:11:59.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:11:59.874 "listen_address": { 00:11:59.874 "trtype": "TCP", 00:11:59.874 "adrfam": "IPv4", 00:11:59.874 "traddr": "10.0.0.3", 00:11:59.874 "trsvcid": "4420" 00:11:59.874 }, 00:11:59.874 "peer_address": { 00:11:59.874 "trtype": "TCP", 00:11:59.874 "adrfam": "IPv4", 00:11:59.874 "traddr": "10.0.0.1", 00:11:59.874 "trsvcid": "34342" 00:11:59.874 }, 00:11:59.874 "auth": { 00:11:59.874 "state": "completed", 00:11:59.874 "digest": "sha384", 00:11:59.874 "dhgroup": "null" 00:11:59.874 } 00:11:59.874 } 00:11:59.874 ]' 00:11:59.874 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.874 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:59.874 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.874 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:59.874 12:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.874 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.874 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.874 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.131 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:12:00.131 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:12:00.698 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.698 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.698 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:00.698 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.698 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.957 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.957 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.957 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:00.957 12:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:00.957 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:00.957 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.957 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:00.957 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:00.957 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:00.957 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.957 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.957 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.957 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.957 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.957 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.957 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:00.957 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.525 00:12:01.525 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.525 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.525 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.783 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.783 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.783 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.783 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.783 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.783 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.783 { 00:12:01.783 "cntlid": 53, 00:12:01.783 "qid": 0, 00:12:01.783 "state": "enabled", 00:12:01.783 "thread": "nvmf_tgt_poll_group_000", 00:12:01.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:01.783 "listen_address": { 00:12:01.783 "trtype": "TCP", 00:12:01.783 "adrfam": "IPv4", 00:12:01.783 "traddr": "10.0.0.3", 00:12:01.783 "trsvcid": "4420" 00:12:01.783 }, 00:12:01.783 "peer_address": { 00:12:01.783 "trtype": "TCP", 00:12:01.783 "adrfam": "IPv4", 00:12:01.783 "traddr": "10.0.0.1", 00:12:01.783 "trsvcid": "34380" 00:12:01.783 }, 00:12:01.783 "auth": { 00:12:01.783 "state": "completed", 00:12:01.783 "digest": "sha384", 00:12:01.783 "dhgroup": "null" 00:12:01.783 } 00:12:01.783 } 00:12:01.783 ]' 00:12:01.783 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.783 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:01.783 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.783 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:01.783 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.783 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.784 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.784 12:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.042 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:12:02.042 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:12:02.979 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.979 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:02.979 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.979 12:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.979 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.979 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.979 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:02.979 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:03.238 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:03.238 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.238 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:03.238 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:03.238 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:03.238 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.238 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:12:03.238 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.238 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.238 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.238 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:03.238 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:03.238 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:03.496 00:12:03.496 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:03.496 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.496 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.755 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.755 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.755 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.755 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.755 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.755 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.755 { 00:12:03.755 "cntlid": 55, 00:12:03.755 "qid": 0, 00:12:03.755 "state": "enabled", 00:12:03.755 "thread": "nvmf_tgt_poll_group_000", 00:12:03.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:03.755 "listen_address": { 00:12:03.755 "trtype": "TCP", 00:12:03.755 "adrfam": "IPv4", 00:12:03.755 "traddr": "10.0.0.3", 00:12:03.755 "trsvcid": "4420" 00:12:03.755 }, 00:12:03.755 "peer_address": { 00:12:03.755 "trtype": "TCP", 00:12:03.755 "adrfam": "IPv4", 00:12:03.755 "traddr": "10.0.0.1", 00:12:03.755 "trsvcid": "34420" 00:12:03.755 }, 00:12:03.755 "auth": { 00:12:03.755 "state": "completed", 00:12:03.755 "digest": "sha384", 00:12:03.755 "dhgroup": "null" 00:12:03.755 } 00:12:03.755 } 00:12:03.755 ]' 00:12:03.755 12:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.014 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.014 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.014 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:04.014 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.014 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.014 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.014 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.273 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:12:04.273 12:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:12:05.208 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:12:05.208 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:05.208 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.208 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.208 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.208 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:05.209 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.209 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:05.209 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:05.209 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:05.209 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.209 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:05.209 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:05.209 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:05.209 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.209 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.209 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.209 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.209 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.209 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.209 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.209 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.776 00:12:05.776 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:12:05.776 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.776 12:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.036 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.036 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.036 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.036 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.036 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.036 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.036 { 00:12:06.036 "cntlid": 57, 00:12:06.036 "qid": 0, 00:12:06.036 "state": "enabled", 00:12:06.036 "thread": "nvmf_tgt_poll_group_000", 00:12:06.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:06.036 "listen_address": { 00:12:06.036 "trtype": "TCP", 00:12:06.036 "adrfam": "IPv4", 00:12:06.036 "traddr": "10.0.0.3", 00:12:06.036 "trsvcid": "4420" 00:12:06.036 }, 00:12:06.036 "peer_address": { 00:12:06.036 "trtype": "TCP", 00:12:06.036 "adrfam": "IPv4", 00:12:06.036 "traddr": "10.0.0.1", 00:12:06.036 "trsvcid": "37628" 00:12:06.036 }, 00:12:06.036 "auth": { 00:12:06.036 "state": "completed", 00:12:06.036 "digest": "sha384", 00:12:06.036 "dhgroup": "ffdhe2048" 00:12:06.036 } 00:12:06.036 } 00:12:06.036 ]' 00:12:06.036 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.036 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.036 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.036 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:06.036 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.036 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.036 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.036 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.604 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:12:06.604 12:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: 
--dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:12:07.173 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.173 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:07.173 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.173 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.173 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.173 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.173 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:07.173 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:07.432 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:07.432 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.432 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:07.432 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:07.432 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:07.432 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.432 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.432 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.432 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.432 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.432 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.432 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.432 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.691 00:12:07.691 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:07.691 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.691 12:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.949 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.949 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.949 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.949 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.949 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.949 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.949 { 00:12:07.949 "cntlid": 59, 00:12:07.949 "qid": 0, 00:12:07.949 "state": "enabled", 00:12:07.949 "thread": "nvmf_tgt_poll_group_000", 00:12:07.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:07.949 "listen_address": { 00:12:07.949 "trtype": "TCP", 00:12:07.949 "adrfam": "IPv4", 00:12:07.949 "traddr": "10.0.0.3", 00:12:07.949 "trsvcid": "4420" 00:12:07.949 }, 00:12:07.949 "peer_address": { 00:12:07.949 "trtype": "TCP", 00:12:07.949 "adrfam": "IPv4", 00:12:07.949 "traddr": "10.0.0.1", 00:12:07.949 "trsvcid": "37656" 00:12:07.949 }, 00:12:07.949 "auth": { 00:12:07.949 "state": "completed", 00:12:07.949 "digest": "sha384", 00:12:07.949 "dhgroup": "ffdhe2048" 00:12:07.949 } 00:12:07.949 } 00:12:07.949 ]' 00:12:07.949 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.207 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:08.207 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.207 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:08.207 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.207 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.207 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.207 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.466 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:12:08.466 12:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:12:09.034 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.034 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:09.034 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.034 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.034 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.034 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.034 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:09.034 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:09.601 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:09.601 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.601 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:09.601 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:09.601 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:09.601 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.601 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.601 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.601 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.601 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.601 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.601 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.601 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.859 00:12:09.859 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.859 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.859 12:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.118 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.118 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.118 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.118 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.118 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.118 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.118 { 00:12:10.118 "cntlid": 61, 00:12:10.118 "qid": 0, 00:12:10.118 "state": "enabled", 00:12:10.118 "thread": "nvmf_tgt_poll_group_000", 00:12:10.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:10.118 "listen_address": { 00:12:10.118 "trtype": "TCP", 00:12:10.118 "adrfam": "IPv4", 00:12:10.118 "traddr": "10.0.0.3", 00:12:10.118 "trsvcid": "4420" 00:12:10.118 }, 00:12:10.118 "peer_address": { 00:12:10.118 "trtype": "TCP", 00:12:10.118 "adrfam": "IPv4", 00:12:10.118 "traddr": "10.0.0.1", 00:12:10.118 "trsvcid": "37682" 00:12:10.118 }, 00:12:10.118 "auth": { 00:12:10.118 "state": "completed", 00:12:10.118 "digest": "sha384", 00:12:10.118 "dhgroup": "ffdhe2048" 00:12:10.118 } 00:12:10.118 } 00:12:10.118 ]' 00:12:10.118 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.118 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:10.118 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.118 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:10.118 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.377 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.377 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.377 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.636 12:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:12:10.636 12:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:12:11.204 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.204 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:11.204 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.204 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.204 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.204 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.204 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:11.204 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:11.462 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:11.462 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.462 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:11.462 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:11.462 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:11.462 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.462 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:12:11.462 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.462 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.462 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.462 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:11.462 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:11.462 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:11.721 00:12:11.979 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.979 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.979 12:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.241 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.241 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.241 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.241 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.241 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.241 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.241 { 00:12:12.241 "cntlid": 63, 00:12:12.241 "qid": 0, 00:12:12.241 "state": "enabled", 00:12:12.241 "thread": "nvmf_tgt_poll_group_000", 00:12:12.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:12.241 "listen_address": { 00:12:12.241 "trtype": "TCP", 00:12:12.241 "adrfam": "IPv4", 00:12:12.241 "traddr": "10.0.0.3", 00:12:12.241 "trsvcid": "4420" 00:12:12.241 }, 00:12:12.241 "peer_address": { 00:12:12.241 "trtype": "TCP", 00:12:12.241 "adrfam": "IPv4", 00:12:12.241 "traddr": "10.0.0.1", 00:12:12.241 "trsvcid": "37696" 00:12:12.241 }, 00:12:12.241 "auth": { 00:12:12.241 "state": "completed", 00:12:12.241 "digest": "sha384", 00:12:12.241 "dhgroup": "ffdhe2048" 00:12:12.241 } 00:12:12.241 } 00:12:12.241 ]' 00:12:12.241 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.241 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:12.241 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.241 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:12.241 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.241 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.241 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.241 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.504 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:12:12.504 12:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:12:13.071 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.071 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:13.071 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.071 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.071 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.071 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:13.071 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.071 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:13.071 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:13.639 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:13.639 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.639 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:13.639 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:13.639 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:13.639 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.639 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.639 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.639 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.639 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.639 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.639 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:13.639 12:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.898 00:12:13.898 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:13.898 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:13.898 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.162 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.162 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.162 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.162 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.162 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.162 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.162 { 00:12:14.162 "cntlid": 65, 00:12:14.162 "qid": 0, 00:12:14.162 "state": "enabled", 00:12:14.162 "thread": "nvmf_tgt_poll_group_000", 00:12:14.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:14.162 "listen_address": { 00:12:14.162 "trtype": "TCP", 00:12:14.162 "adrfam": "IPv4", 00:12:14.162 "traddr": "10.0.0.3", 00:12:14.162 "trsvcid": "4420" 00:12:14.162 }, 00:12:14.162 "peer_address": { 00:12:14.162 "trtype": "TCP", 00:12:14.162 "adrfam": "IPv4", 00:12:14.162 "traddr": "10.0.0.1", 00:12:14.162 "trsvcid": "37728" 00:12:14.162 }, 00:12:14.162 "auth": { 00:12:14.162 "state": "completed", 00:12:14.162 "digest": "sha384", 00:12:14.162 "dhgroup": "ffdhe3072" 00:12:14.162 } 00:12:14.162 } 00:12:14.162 ]' 00:12:14.162 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.162 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:14.162 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.421 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:14.421 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.421 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.421 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.421 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.679 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:12:14.679 12:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:12:15.247 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.247 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:15.247 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.247 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.247 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.247 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.247 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:15.247 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:15.815 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:15.815 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.815 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:15.815 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:15.815 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:15.815 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.815 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.815 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.815 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.815 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.815 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.815 12:32:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.815 12:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.074 00:12:16.074 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.074 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.074 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.334 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.334 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.334 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.334 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.334 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.334 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.334 { 00:12:16.334 "cntlid": 67, 00:12:16.334 "qid": 0, 00:12:16.334 "state": "enabled", 00:12:16.334 "thread": "nvmf_tgt_poll_group_000", 00:12:16.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:16.334 "listen_address": { 00:12:16.334 "trtype": "TCP", 00:12:16.334 "adrfam": "IPv4", 00:12:16.334 "traddr": "10.0.0.3", 00:12:16.334 "trsvcid": "4420" 00:12:16.334 }, 00:12:16.334 "peer_address": { 00:12:16.334 "trtype": "TCP", 00:12:16.334 "adrfam": "IPv4", 00:12:16.334 "traddr": "10.0.0.1", 00:12:16.334 "trsvcid": "56796" 00:12:16.334 }, 00:12:16.334 "auth": { 00:12:16.334 "state": "completed", 00:12:16.334 "digest": "sha384", 00:12:16.334 "dhgroup": "ffdhe3072" 00:12:16.334 } 00:12:16.334 } 00:12:16.334 ]' 00:12:16.334 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.334 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:16.334 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.334 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:16.334 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.594 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.594 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.594 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.853 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:12:16.853 12:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:12:17.422 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.422 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:17.422 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.422 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.422 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.422 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.422 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:17.422 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:17.991 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:17.991 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.991 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:17.991 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:17.991 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:17.991 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.991 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.991 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.991 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.991 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.991 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.991 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.991 12:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.251 00:12:18.251 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.251 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.251 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.511 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.511 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.511 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.511 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.511 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.511 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.511 { 00:12:18.511 "cntlid": 69, 00:12:18.511 "qid": 0, 00:12:18.511 "state": "enabled", 00:12:18.511 "thread": "nvmf_tgt_poll_group_000", 00:12:18.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:18.511 "listen_address": { 00:12:18.511 "trtype": "TCP", 00:12:18.511 "adrfam": "IPv4", 00:12:18.511 "traddr": "10.0.0.3", 00:12:18.511 "trsvcid": "4420" 00:12:18.511 }, 00:12:18.511 "peer_address": { 00:12:18.511 "trtype": "TCP", 00:12:18.511 "adrfam": "IPv4", 00:12:18.511 "traddr": "10.0.0.1", 00:12:18.511 "trsvcid": "56820" 00:12:18.511 }, 00:12:18.511 "auth": { 00:12:18.511 "state": "completed", 00:12:18.511 "digest": "sha384", 00:12:18.511 "dhgroup": "ffdhe3072" 00:12:18.511 } 00:12:18.511 } 00:12:18.511 ]' 00:12:18.511 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.511 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:18.511 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.511 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:18.511 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.771 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.771 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:18.771 12:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.030 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:12:19.030 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:12:19.599 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.599 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:19.599 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.599 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.599 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.599 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.599 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:19.599 12:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:19.858 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:19.858 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.858 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:19.858 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:19.858 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:19.858 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.858 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:12:19.858 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.858 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.858 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.858 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:19.858 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:19.858 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:20.118 00:12:20.118 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.118 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.118 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.377 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.377 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.377 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.377 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.636 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.636 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.636 { 00:12:20.636 "cntlid": 71, 00:12:20.636 "qid": 0, 00:12:20.636 "state": "enabled", 00:12:20.636 "thread": "nvmf_tgt_poll_group_000", 00:12:20.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:20.636 "listen_address": { 00:12:20.636 "trtype": "TCP", 00:12:20.636 "adrfam": "IPv4", 00:12:20.636 "traddr": "10.0.0.3", 00:12:20.636 "trsvcid": "4420" 00:12:20.636 }, 00:12:20.636 "peer_address": { 00:12:20.636 "trtype": "TCP", 00:12:20.636 "adrfam": "IPv4", 00:12:20.636 "traddr": "10.0.0.1", 00:12:20.636 "trsvcid": "56830" 00:12:20.636 }, 00:12:20.636 "auth": { 00:12:20.636 "state": "completed", 00:12:20.636 "digest": "sha384", 00:12:20.636 "dhgroup": "ffdhe3072" 00:12:20.636 } 00:12:20.636 } 00:12:20.636 ]' 00:12:20.636 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.636 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:20.636 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.636 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:20.636 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.636 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.636 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.636 12:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.896 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:12:20.896 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:12:21.464 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:21.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:21.464 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:21.464 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.464 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.464 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.464 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:21.464 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:21.464 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:21.464 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:21.723 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:21.723 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.723 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:21.724 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:21.724 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:21.724 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.724 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.724 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.724 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.724 12:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.724 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.724 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.724 12:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.983 00:12:22.242 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.242 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.242 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.502 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.502 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.502 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.502 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.502 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.502 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.502 { 00:12:22.502 "cntlid": 73, 00:12:22.502 "qid": 0, 00:12:22.502 "state": "enabled", 00:12:22.502 "thread": "nvmf_tgt_poll_group_000", 00:12:22.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:22.502 "listen_address": { 00:12:22.502 "trtype": "TCP", 00:12:22.502 "adrfam": "IPv4", 00:12:22.502 "traddr": "10.0.0.3", 00:12:22.502 "trsvcid": "4420" 00:12:22.502 }, 00:12:22.502 "peer_address": { 00:12:22.502 "trtype": "TCP", 00:12:22.502 "adrfam": "IPv4", 00:12:22.502 "traddr": "10.0.0.1", 00:12:22.502 "trsvcid": "56858" 00:12:22.502 }, 00:12:22.502 "auth": { 00:12:22.502 "state": "completed", 00:12:22.502 "digest": "sha384", 00:12:22.502 "dhgroup": "ffdhe4096" 00:12:22.502 } 00:12:22.502 } 00:12:22.502 ]' 00:12:22.502 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:22.502 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:22.502 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:22.502 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:22.502 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:22.502 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.502 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.502 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.762 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:12:22.762 12:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:12:23.330 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.330 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:23.330 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.330 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.330 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.330 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.330 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:23.330 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:23.899 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:23.899 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.899 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:23.899 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:23.899 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:23.899 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.899 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.899 12:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.899 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.899 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.899 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.899 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.899 12:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.157 00:12:24.157 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.157 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.157 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.415 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.415 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.415 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.415 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.415 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.415 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.415 { 00:12:24.415 "cntlid": 75, 00:12:24.415 "qid": 0, 00:12:24.415 "state": "enabled", 00:12:24.415 "thread": "nvmf_tgt_poll_group_000", 00:12:24.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:24.415 "listen_address": { 00:12:24.415 "trtype": "TCP", 00:12:24.415 "adrfam": "IPv4", 00:12:24.415 "traddr": "10.0.0.3", 00:12:24.415 "trsvcid": "4420" 00:12:24.415 }, 00:12:24.415 "peer_address": { 00:12:24.415 "trtype": "TCP", 00:12:24.415 "adrfam": "IPv4", 00:12:24.415 "traddr": "10.0.0.1", 00:12:24.415 "trsvcid": "56882" 00:12:24.415 }, 00:12:24.415 "auth": { 00:12:24.415 "state": "completed", 00:12:24.415 "digest": "sha384", 00:12:24.415 "dhgroup": "ffdhe4096" 00:12:24.415 } 00:12:24.415 } 00:12:24.415 ]' 00:12:24.415 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.415 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:24.415 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.415 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:24.415 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.415 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.415 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.415 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.674 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:12:24.675 12:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.639 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.899 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.899 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.899 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.899 12:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.158 00:12:26.158 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.158 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.158 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.418 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.418 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.418 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.418 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.418 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.418 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.418 { 00:12:26.418 "cntlid": 77, 00:12:26.418 "qid": 0, 00:12:26.418 "state": "enabled", 00:12:26.418 "thread": "nvmf_tgt_poll_group_000", 00:12:26.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:26.418 "listen_address": { 00:12:26.418 "trtype": "TCP", 00:12:26.418 "adrfam": "IPv4", 00:12:26.418 "traddr": "10.0.0.3", 00:12:26.418 "trsvcid": "4420" 00:12:26.418 }, 00:12:26.418 "peer_address": { 00:12:26.418 "trtype": "TCP", 00:12:26.418 "adrfam": "IPv4", 00:12:26.418 "traddr": "10.0.0.1", 00:12:26.418 "trsvcid": "51642" 00:12:26.418 }, 00:12:26.418 "auth": { 00:12:26.418 "state": "completed", 00:12:26.418 "digest": "sha384", 00:12:26.418 "dhgroup": "ffdhe4096" 00:12:26.418 } 00:12:26.418 } 00:12:26.418 ]' 00:12:26.418 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.418 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:26.418 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:26.418 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:26.418 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.677 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.677 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.677 12:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.937 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:12:26.937 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:12:27.506 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.506 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:27.506 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.506 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.506 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.506 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.506 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:27.506 12:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:27.765 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:27.765 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.765 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:27.765 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:27.765 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:27.765 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.765 12:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:12:27.765 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.765 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.024 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.024 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:28.024 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.024 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.338 00:12:28.338 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.338 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.338 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.597 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.597 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.597 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.597 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.597 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.597 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.597 { 00:12:28.597 "cntlid": 79, 00:12:28.597 "qid": 0, 00:12:28.597 "state": "enabled", 00:12:28.597 "thread": "nvmf_tgt_poll_group_000", 00:12:28.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:28.597 "listen_address": { 00:12:28.597 "trtype": "TCP", 00:12:28.597 "adrfam": "IPv4", 00:12:28.597 "traddr": "10.0.0.3", 00:12:28.597 "trsvcid": "4420" 00:12:28.597 }, 00:12:28.597 "peer_address": { 00:12:28.597 "trtype": "TCP", 00:12:28.597 "adrfam": "IPv4", 00:12:28.597 "traddr": "10.0.0.1", 00:12:28.597 "trsvcid": "51676" 00:12:28.597 }, 00:12:28.597 "auth": { 00:12:28.597 "state": "completed", 00:12:28.597 "digest": "sha384", 00:12:28.597 "dhgroup": "ffdhe4096" 00:12:28.597 } 00:12:28.597 } 00:12:28.597 ]' 00:12:28.597 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.597 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:28.597 12:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.597 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:28.597 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.856 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.856 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.856 12:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.115 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:12:29.115 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:12:29.684 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.684 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:29.684 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.684 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.684 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.684 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:29.684 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.684 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:29.684 12:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:29.943 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:29.943 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:29.943 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:29.943 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:29.943 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:29.943 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.943 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.943 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.943 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.943 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.943 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.943 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.943 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.511 00:12:30.511 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.511 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.511 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.770 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.770 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.770 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.770 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.770 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.770 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.770 { 00:12:30.770 "cntlid": 81, 00:12:30.770 "qid": 0, 00:12:30.770 "state": "enabled", 00:12:30.770 "thread": "nvmf_tgt_poll_group_000", 00:12:30.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:30.770 "listen_address": { 00:12:30.770 "trtype": "TCP", 00:12:30.770 "adrfam": "IPv4", 00:12:30.770 "traddr": "10.0.0.3", 00:12:30.770 "trsvcid": "4420" 00:12:30.770 }, 00:12:30.770 "peer_address": { 00:12:30.770 "trtype": "TCP", 00:12:30.770 "adrfam": "IPv4", 00:12:30.770 "traddr": "10.0.0.1", 00:12:30.770 "trsvcid": "51708" 00:12:30.770 }, 00:12:30.770 "auth": { 00:12:30.770 "state": "completed", 00:12:30.770 "digest": "sha384", 00:12:30.770 "dhgroup": "ffdhe6144" 00:12:30.770 } 00:12:30.770 } 00:12:30.770 ]' 00:12:30.770 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
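The iterations traced above all repeat one DH-CHAP verification cycle, once per digest/dhgroup/key combination. A minimal sketch of a single cycle follows, assuming the same target address (10.0.0.3:4420), subsystem NQN, host NQN, and host RPC socket that appear in the trace; the key names (key1/ckey1) stand in for whichever keyfile the loop is on, and rpc_cmd is the target-side RPC helper used by the test framework. This is an illustrative reconstruction of the flow, not the test script itself.

# --- one DH-CHAP auth cycle (sketch; values below mirror the trace) ---
HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9"
SUBNQN="nqn.2024-03.io.spdk:cnode0"

# Host-side bdev layer: restrict negotiation to the digest/dhgroup under test.
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# Target side: allow the host NQN and bind it to a key pair; the optional
# --dhchap-ctrlr-key enables bidirectional (controller) authentication.
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller with the matching keys, then check that the
# qpair negotiated the expected digest/dhgroup and finished authentication.
$HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
$HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'            # expect: nvme0
rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'  # expect: state "completed"

# Tear down before the next key/dhgroup iteration.
$HOSTRPC bdev_nvme_detach_controller nvme0
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The separate nvme connect / nvme disconnect entries in the trace exercise the same keys through the kernel initiator, passing the DHHC-1 secrets directly via --dhchap-secret and --dhchap-ctrl-secret instead of named keyfiles.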
00:12:30.770 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:30.770 12:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.029 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:31.029 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.029 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.029 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.029 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.288 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:12:31.288 12:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:12:31.857 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.857 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:31.857 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.857 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.857 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.857 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.857 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:31.857 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:32.116 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:32.116 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.116 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:32.116 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:32.116 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:32.116 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.116 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.116 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.116 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.116 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.116 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.116 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.116 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.684 00:12:32.684 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.684 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.684 12:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.943 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.943 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.943 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.943 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.943 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.943 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.943 { 00:12:32.943 "cntlid": 83, 00:12:32.943 "qid": 0, 00:12:32.943 "state": "enabled", 00:12:32.943 "thread": "nvmf_tgt_poll_group_000", 00:12:32.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:32.943 "listen_address": { 00:12:32.943 "trtype": "TCP", 00:12:32.943 "adrfam": "IPv4", 00:12:32.943 "traddr": "10.0.0.3", 00:12:32.943 "trsvcid": "4420" 00:12:32.943 }, 00:12:32.943 "peer_address": { 00:12:32.943 "trtype": "TCP", 00:12:32.943 "adrfam": "IPv4", 00:12:32.943 "traddr": "10.0.0.1", 00:12:32.943 "trsvcid": "51728" 00:12:32.943 }, 00:12:32.943 "auth": { 00:12:32.943 "state": "completed", 00:12:32.943 "digest": "sha384", 
00:12:32.943 "dhgroup": "ffdhe6144" 00:12:32.943 } 00:12:32.943 } 00:12:32.943 ]' 00:12:32.943 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.943 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:32.943 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:32.943 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:32.943 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.202 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.202 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.202 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.461 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:12:33.461 12:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:12:34.029 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.029 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:34.029 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.029 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.029 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.029 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.029 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:34.029 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:34.288 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:34.288 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.288 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:34.288 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:34.288 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:34.288 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.288 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.288 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.288 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.288 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.288 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.288 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.288 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:34.856 00:12:34.856 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.856 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:34.856 12:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.115 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.115 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.115 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.115 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.115 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.115 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.115 { 00:12:35.115 "cntlid": 85, 00:12:35.115 "qid": 0, 00:12:35.115 "state": "enabled", 00:12:35.115 "thread": "nvmf_tgt_poll_group_000", 00:12:35.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:35.115 "listen_address": { 00:12:35.115 "trtype": "TCP", 00:12:35.115 "adrfam": "IPv4", 00:12:35.115 "traddr": "10.0.0.3", 00:12:35.115 "trsvcid": "4420" 00:12:35.115 }, 00:12:35.115 "peer_address": { 00:12:35.115 "trtype": "TCP", 00:12:35.115 "adrfam": "IPv4", 00:12:35.115 "traddr": "10.0.0.1", 00:12:35.115 "trsvcid": "56136" 
00:12:35.115 }, 00:12:35.115 "auth": { 00:12:35.115 "state": "completed", 00:12:35.115 "digest": "sha384", 00:12:35.115 "dhgroup": "ffdhe6144" 00:12:35.115 } 00:12:35.115 } 00:12:35.115 ]' 00:12:35.115 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.115 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:35.115 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.115 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:35.115 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.115 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.115 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.115 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.374 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:12:35.374 12:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:12:35.942 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.942 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:35.942 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.942 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.942 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.942 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.942 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:35.942 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:36.200 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:36.200 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:36.200 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:36.200 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:36.200 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:36.200 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.200 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:12:36.200 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.200 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.459 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.459 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:36.459 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:36.459 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:36.717 00:12:36.717 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.717 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:36.717 12:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.284 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.284 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.284 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.284 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.284 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.284 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.284 { 00:12:37.284 "cntlid": 87, 00:12:37.284 "qid": 0, 00:12:37.284 "state": "enabled", 00:12:37.284 "thread": "nvmf_tgt_poll_group_000", 00:12:37.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:37.284 "listen_address": { 00:12:37.284 "trtype": "TCP", 00:12:37.284 "adrfam": "IPv4", 00:12:37.284 "traddr": "10.0.0.3", 00:12:37.284 "trsvcid": "4420" 00:12:37.284 }, 00:12:37.284 "peer_address": { 00:12:37.284 "trtype": "TCP", 00:12:37.284 "adrfam": "IPv4", 00:12:37.284 "traddr": "10.0.0.1", 00:12:37.284 "trsvcid": 
"56170" 00:12:37.284 }, 00:12:37.284 "auth": { 00:12:37.284 "state": "completed", 00:12:37.284 "digest": "sha384", 00:12:37.284 "dhgroup": "ffdhe6144" 00:12:37.284 } 00:12:37.284 } 00:12:37.284 ]' 00:12:37.284 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.284 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:37.284 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.284 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:37.284 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.284 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.284 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.284 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.543 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:12:37.543 12:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:12:38.532 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.532 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:38.532 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.532 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.532 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.532 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:38.532 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.532 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:38.532 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:38.532 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:38.532 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:38.532 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:38.532 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:38.532 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:38.532 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.532 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.533 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.533 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.533 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.533 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.533 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:38.533 12:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.469 00:12:39.469 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.469 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.469 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.469 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.469 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.469 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.469 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.469 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.469 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.469 { 00:12:39.469 "cntlid": 89, 00:12:39.469 "qid": 0, 00:12:39.469 "state": "enabled", 00:12:39.469 "thread": "nvmf_tgt_poll_group_000", 00:12:39.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:39.469 "listen_address": { 00:12:39.469 "trtype": "TCP", 00:12:39.469 "adrfam": "IPv4", 00:12:39.469 "traddr": "10.0.0.3", 00:12:39.469 "trsvcid": "4420" 00:12:39.469 }, 00:12:39.469 "peer_address": { 00:12:39.469 
"trtype": "TCP", 00:12:39.469 "adrfam": "IPv4", 00:12:39.469 "traddr": "10.0.0.1", 00:12:39.469 "trsvcid": "56202" 00:12:39.469 }, 00:12:39.469 "auth": { 00:12:39.469 "state": "completed", 00:12:39.469 "digest": "sha384", 00:12:39.469 "dhgroup": "ffdhe8192" 00:12:39.469 } 00:12:39.469 } 00:12:39.469 ]' 00:12:39.470 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.729 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:39.729 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:39.729 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:39.729 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:39.729 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.729 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.729 12:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.989 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:12:39.989 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:12:40.926 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.926 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:40.926 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.926 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.926 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.926 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.926 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:40.926 12:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:40.926 12:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:40.926 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.926 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:40.926 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:40.926 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:40.926 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.926 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.926 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.926 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.926 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.926 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.926 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:40.926 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.864 00:12:41.865 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.865 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.865 12:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.865 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.865 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.865 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.865 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.123 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.123 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.123 { 00:12:42.123 "cntlid": 91, 00:12:42.123 "qid": 0, 00:12:42.123 "state": "enabled", 00:12:42.123 "thread": "nvmf_tgt_poll_group_000", 00:12:42.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 
00:12:42.123 "listen_address": { 00:12:42.123 "trtype": "TCP", 00:12:42.123 "adrfam": "IPv4", 00:12:42.123 "traddr": "10.0.0.3", 00:12:42.123 "trsvcid": "4420" 00:12:42.123 }, 00:12:42.123 "peer_address": { 00:12:42.123 "trtype": "TCP", 00:12:42.123 "adrfam": "IPv4", 00:12:42.123 "traddr": "10.0.0.1", 00:12:42.123 "trsvcid": "56240" 00:12:42.123 }, 00:12:42.123 "auth": { 00:12:42.123 "state": "completed", 00:12:42.123 "digest": "sha384", 00:12:42.123 "dhgroup": "ffdhe8192" 00:12:42.123 } 00:12:42.123 } 00:12:42.123 ]' 00:12:42.123 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.123 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:42.123 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.123 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:42.123 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.123 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.123 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.123 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.383 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:12:42.383 12:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:12:42.951 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.951 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:42.951 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.951 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.951 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.951 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.951 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:42.951 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:43.211 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:43.211 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.211 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:43.211 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:43.211 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:43.211 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.211 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.211 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.211 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.470 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.470 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.470 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.470 12:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.040 00:12:44.040 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.040 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.040 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.299 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.299 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.299 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.299 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.299 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.299 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.299 { 00:12:44.299 "cntlid": 93, 00:12:44.299 "qid": 0, 00:12:44.299 "state": "enabled", 00:12:44.299 "thread": 
"nvmf_tgt_poll_group_000", 00:12:44.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:44.299 "listen_address": { 00:12:44.299 "trtype": "TCP", 00:12:44.299 "adrfam": "IPv4", 00:12:44.299 "traddr": "10.0.0.3", 00:12:44.299 "trsvcid": "4420" 00:12:44.299 }, 00:12:44.299 "peer_address": { 00:12:44.299 "trtype": "TCP", 00:12:44.299 "adrfam": "IPv4", 00:12:44.299 "traddr": "10.0.0.1", 00:12:44.299 "trsvcid": "56274" 00:12:44.299 }, 00:12:44.299 "auth": { 00:12:44.299 "state": "completed", 00:12:44.299 "digest": "sha384", 00:12:44.299 "dhgroup": "ffdhe8192" 00:12:44.299 } 00:12:44.299 } 00:12:44.299 ]' 00:12:44.299 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.299 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:44.299 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.299 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:44.299 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.299 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.299 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.299 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.867 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:12:44.867 12:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:12:45.435 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.435 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:45.435 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.435 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.435 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.435 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.435 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:45.435 12:32:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:45.695 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:45.695 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.695 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:45.695 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:45.695 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:45.695 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.695 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:12:45.695 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.695 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.695 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.695 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:45.695 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:45.695 12:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:46.263 00:12:46.263 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.263 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.263 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.832 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.832 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.832 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.832 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.832 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.832 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.832 { 00:12:46.832 "cntlid": 95, 00:12:46.832 "qid": 0, 00:12:46.832 "state": "enabled", 00:12:46.832 
"thread": "nvmf_tgt_poll_group_000", 00:12:46.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:46.832 "listen_address": { 00:12:46.832 "trtype": "TCP", 00:12:46.832 "adrfam": "IPv4", 00:12:46.832 "traddr": "10.0.0.3", 00:12:46.832 "trsvcid": "4420" 00:12:46.832 }, 00:12:46.832 "peer_address": { 00:12:46.832 "trtype": "TCP", 00:12:46.832 "adrfam": "IPv4", 00:12:46.832 "traddr": "10.0.0.1", 00:12:46.832 "trsvcid": "53248" 00:12:46.832 }, 00:12:46.832 "auth": { 00:12:46.832 "state": "completed", 00:12:46.832 "digest": "sha384", 00:12:46.833 "dhgroup": "ffdhe8192" 00:12:46.833 } 00:12:46.833 } 00:12:46.833 ]' 00:12:46.833 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.833 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:46.833 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.833 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:46.833 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.833 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.833 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.833 12:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.092 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:12:47.092 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:12:47.660 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.660 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:47.660 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.660 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.660 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.660 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:47.660 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:47.660 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.660 12:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:47.660 12:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:48.229 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:48.229 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.229 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:48.229 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:48.229 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:48.229 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.229 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.229 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.229 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.229 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.229 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.229 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.229 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.488 00:12:48.488 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.488 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.488 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.747 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.747 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.747 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.747 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.747 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.747 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.747 { 00:12:48.747 "cntlid": 97, 00:12:48.747 "qid": 0, 00:12:48.747 "state": "enabled", 00:12:48.747 "thread": "nvmf_tgt_poll_group_000", 00:12:48.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:48.747 "listen_address": { 00:12:48.747 "trtype": "TCP", 00:12:48.747 "adrfam": "IPv4", 00:12:48.747 "traddr": "10.0.0.3", 00:12:48.747 "trsvcid": "4420" 00:12:48.747 }, 00:12:48.747 "peer_address": { 00:12:48.747 "trtype": "TCP", 00:12:48.747 "adrfam": "IPv4", 00:12:48.747 "traddr": "10.0.0.1", 00:12:48.747 "trsvcid": "53274" 00:12:48.747 }, 00:12:48.747 "auth": { 00:12:48.747 "state": "completed", 00:12:48.747 "digest": "sha512", 00:12:48.747 "dhgroup": "null" 00:12:48.747 } 00:12:48.747 } 00:12:48.747 ]' 00:12:48.747 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.747 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:48.747 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.747 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:48.747 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.747 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.747 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.747 12:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.006 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:12:49.006 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:12:49.575 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.575 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:49.575 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.575 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.575 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:49.575 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:49.575 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:49.575 12:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:49.834 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:49.834 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.834 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:49.834 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:49.835 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:49.835 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.835 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.835 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.835 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.835 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.835 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.835 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.093 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.352 00:12:50.353 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.353 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.353 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.612 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.612 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.612 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.612 12:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.612 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.612 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.612 { 00:12:50.612 "cntlid": 99, 00:12:50.612 "qid": 0, 00:12:50.612 "state": "enabled", 00:12:50.612 "thread": "nvmf_tgt_poll_group_000", 00:12:50.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:50.612 "listen_address": { 00:12:50.612 "trtype": "TCP", 00:12:50.612 "adrfam": "IPv4", 00:12:50.612 "traddr": "10.0.0.3", 00:12:50.612 "trsvcid": "4420" 00:12:50.612 }, 00:12:50.612 "peer_address": { 00:12:50.612 "trtype": "TCP", 00:12:50.612 "adrfam": "IPv4", 00:12:50.612 "traddr": "10.0.0.1", 00:12:50.612 "trsvcid": "53304" 00:12:50.612 }, 00:12:50.612 "auth": { 00:12:50.612 "state": "completed", 00:12:50.612 "digest": "sha512", 00:12:50.612 "dhgroup": "null" 00:12:50.612 } 00:12:50.612 } 00:12:50.612 ]' 00:12:50.612 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.612 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:50.612 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:50.612 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:50.612 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:50.612 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.612 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.612 12:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.185 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:12:51.185 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:12:51.760 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.760 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:51.760 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.760 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.760 12:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.760 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:51.760 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:51.760 12:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:52.019 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:52.019 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.019 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:52.019 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:52.019 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:52.019 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.019 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.019 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.019 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.019 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.019 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.019 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.019 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.279 00:12:52.279 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:52.279 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.279 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.538 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.538 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.538 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.538 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.538 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.538 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:52.538 { 00:12:52.538 "cntlid": 101, 00:12:52.538 "qid": 0, 00:12:52.538 "state": "enabled", 00:12:52.538 "thread": "nvmf_tgt_poll_group_000", 00:12:52.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:52.538 "listen_address": { 00:12:52.538 "trtype": "TCP", 00:12:52.538 "adrfam": "IPv4", 00:12:52.538 "traddr": "10.0.0.3", 00:12:52.538 "trsvcid": "4420" 00:12:52.538 }, 00:12:52.538 "peer_address": { 00:12:52.538 "trtype": "TCP", 00:12:52.538 "adrfam": "IPv4", 00:12:52.538 "traddr": "10.0.0.1", 00:12:52.538 "trsvcid": "53336" 00:12:52.538 }, 00:12:52.538 "auth": { 00:12:52.538 "state": "completed", 00:12:52.538 "digest": "sha512", 00:12:52.538 "dhgroup": "null" 00:12:52.538 } 00:12:52.538 } 00:12:52.538 ]' 00:12:52.539 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:52.539 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:52.539 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:52.539 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:52.539 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:52.539 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.539 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.539 12:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.798 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:12:52.799 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:12:53.367 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.626 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:53.626 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.626 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:53.626 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.626 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.626 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:53.626 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:53.626 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:53.626 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.626 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:53.626 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:53.626 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:53.626 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.626 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:12:53.626 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.626 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.886 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.886 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:53.886 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:53.886 12:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.148 00:12:54.148 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.148 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.148 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.409 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.409 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.409 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:54.409 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.409 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.409 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.409 { 00:12:54.409 "cntlid": 103, 00:12:54.409 "qid": 0, 00:12:54.409 "state": "enabled", 00:12:54.409 "thread": "nvmf_tgt_poll_group_000", 00:12:54.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:54.409 "listen_address": { 00:12:54.409 "trtype": "TCP", 00:12:54.409 "adrfam": "IPv4", 00:12:54.409 "traddr": "10.0.0.3", 00:12:54.409 "trsvcid": "4420" 00:12:54.409 }, 00:12:54.409 "peer_address": { 00:12:54.409 "trtype": "TCP", 00:12:54.409 "adrfam": "IPv4", 00:12:54.409 "traddr": "10.0.0.1", 00:12:54.409 "trsvcid": "53372" 00:12:54.409 }, 00:12:54.409 "auth": { 00:12:54.409 "state": "completed", 00:12:54.409 "digest": "sha512", 00:12:54.409 "dhgroup": "null" 00:12:54.409 } 00:12:54.409 } 00:12:54.409 ]' 00:12:54.409 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.409 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:54.409 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.409 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:54.409 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.409 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.409 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.409 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.977 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:12:54.977 12:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:12:55.545 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.545 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:55.545 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.545 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.545 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:55.545 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:55.545 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.545 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:55.545 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:55.804 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:55.804 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.804 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:55.804 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:55.804 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:55.804 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.804 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.804 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.804 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.804 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.804 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.804 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.804 12:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.063 00:12:56.063 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.063 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.063 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.323 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.323 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.323 
12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.323 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.323 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.323 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.323 { 00:12:56.323 "cntlid": 105, 00:12:56.323 "qid": 0, 00:12:56.323 "state": "enabled", 00:12:56.323 "thread": "nvmf_tgt_poll_group_000", 00:12:56.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:56.323 "listen_address": { 00:12:56.323 "trtype": "TCP", 00:12:56.323 "adrfam": "IPv4", 00:12:56.323 "traddr": "10.0.0.3", 00:12:56.323 "trsvcid": "4420" 00:12:56.323 }, 00:12:56.323 "peer_address": { 00:12:56.323 "trtype": "TCP", 00:12:56.323 "adrfam": "IPv4", 00:12:56.323 "traddr": "10.0.0.1", 00:12:56.323 "trsvcid": "38662" 00:12:56.323 }, 00:12:56.323 "auth": { 00:12:56.323 "state": "completed", 00:12:56.323 "digest": "sha512", 00:12:56.323 "dhgroup": "ffdhe2048" 00:12:56.323 } 00:12:56.323 } 00:12:56.323 ]' 00:12:56.323 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.582 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:56.582 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.582 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:56.582 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.582 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.582 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.582 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.841 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:12:56.841 12:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:12:57.409 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.669 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:57.669 12:33:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.669 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.669 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.669 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.669 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:57.669 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:57.928 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:57.928 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.928 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:57.928 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:57.928 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:57.928 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.928 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.928 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.928 12:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.928 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.928 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.928 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.928 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.187 00:12:58.187 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.187 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.187 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.447 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:58.447 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.447 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.447 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.447 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.447 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.447 { 00:12:58.447 "cntlid": 107, 00:12:58.447 "qid": 0, 00:12:58.447 "state": "enabled", 00:12:58.447 "thread": "nvmf_tgt_poll_group_000", 00:12:58.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:12:58.447 "listen_address": { 00:12:58.447 "trtype": "TCP", 00:12:58.447 "adrfam": "IPv4", 00:12:58.447 "traddr": "10.0.0.3", 00:12:58.447 "trsvcid": "4420" 00:12:58.447 }, 00:12:58.447 "peer_address": { 00:12:58.447 "trtype": "TCP", 00:12:58.447 "adrfam": "IPv4", 00:12:58.447 "traddr": "10.0.0.1", 00:12:58.447 "trsvcid": "38688" 00:12:58.447 }, 00:12:58.447 "auth": { 00:12:58.447 "state": "completed", 00:12:58.447 "digest": "sha512", 00:12:58.447 "dhgroup": "ffdhe2048" 00:12:58.447 } 00:12:58.447 } 00:12:58.447 ]' 00:12:58.447 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.447 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.447 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.707 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:58.707 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.707 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.707 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.707 12:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.966 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:12:58.966 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:12:59.535 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.535 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:12:59.535 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.535 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.535 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.535 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.535 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:59.535 12:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:59.794 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:59.794 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.794 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:59.794 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:59.794 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:59.794 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.794 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.794 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.794 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.794 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.794 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.794 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.794 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.363 00:13:00.363 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.363 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.363 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:13:00.622 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.622 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.622 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.622 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.622 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.622 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.622 { 00:13:00.622 "cntlid": 109, 00:13:00.622 "qid": 0, 00:13:00.622 "state": "enabled", 00:13:00.622 "thread": "nvmf_tgt_poll_group_000", 00:13:00.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:00.622 "listen_address": { 00:13:00.622 "trtype": "TCP", 00:13:00.622 "adrfam": "IPv4", 00:13:00.622 "traddr": "10.0.0.3", 00:13:00.622 "trsvcid": "4420" 00:13:00.622 }, 00:13:00.622 "peer_address": { 00:13:00.622 "trtype": "TCP", 00:13:00.622 "adrfam": "IPv4", 00:13:00.622 "traddr": "10.0.0.1", 00:13:00.622 "trsvcid": "38712" 00:13:00.622 }, 00:13:00.622 "auth": { 00:13:00.622 "state": "completed", 00:13:00.622 "digest": "sha512", 00:13:00.622 "dhgroup": "ffdhe2048" 00:13:00.622 } 00:13:00.622 } 00:13:00.622 ]' 00:13:00.622 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.622 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.622 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.622 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:00.622 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.622 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.622 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.622 12:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.881 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:13:00.881 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:13:01.817 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.817 12:33:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:01.817 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.817 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.817 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.817 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.817 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:01.817 12:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:01.817 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:01.817 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.817 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:01.817 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:01.817 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:01.817 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.817 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:13:01.817 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.817 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.077 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.077 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:02.077 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:02.077 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:02.335 00:13:02.335 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.335 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.335 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.593 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.593 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.593 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.593 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.593 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.593 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.593 { 00:13:02.593 "cntlid": 111, 00:13:02.593 "qid": 0, 00:13:02.593 "state": "enabled", 00:13:02.593 "thread": "nvmf_tgt_poll_group_000", 00:13:02.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:02.593 "listen_address": { 00:13:02.593 "trtype": "TCP", 00:13:02.593 "adrfam": "IPv4", 00:13:02.593 "traddr": "10.0.0.3", 00:13:02.593 "trsvcid": "4420" 00:13:02.593 }, 00:13:02.593 "peer_address": { 00:13:02.593 "trtype": "TCP", 00:13:02.593 "adrfam": "IPv4", 00:13:02.593 "traddr": "10.0.0.1", 00:13:02.593 "trsvcid": "38736" 00:13:02.593 }, 00:13:02.593 "auth": { 00:13:02.593 "state": "completed", 00:13:02.593 "digest": "sha512", 00:13:02.593 "dhgroup": "ffdhe2048" 00:13:02.593 } 00:13:02.593 } 00:13:02.593 ]' 00:13:02.593 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.593 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.593 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.593 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:02.593 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.593 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.593 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.593 12:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.161 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:13:03.161 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:13:03.729 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.729 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:03.729 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.729 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.729 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.729 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:03.729 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.729 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:03.729 12:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:03.988 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:03.988 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.988 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:03.988 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:03.988 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:03.988 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.988 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.988 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.988 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.988 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.988 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.989 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.989 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.556 00:13:04.556 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.556 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:13:04.556 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.815 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.815 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.815 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.815 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.815 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.815 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.815 { 00:13:04.815 "cntlid": 113, 00:13:04.815 "qid": 0, 00:13:04.815 "state": "enabled", 00:13:04.815 "thread": "nvmf_tgt_poll_group_000", 00:13:04.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:04.815 "listen_address": { 00:13:04.815 "trtype": "TCP", 00:13:04.815 "adrfam": "IPv4", 00:13:04.815 "traddr": "10.0.0.3", 00:13:04.815 "trsvcid": "4420" 00:13:04.815 }, 00:13:04.815 "peer_address": { 00:13:04.815 "trtype": "TCP", 00:13:04.815 "adrfam": "IPv4", 00:13:04.815 "traddr": "10.0.0.1", 00:13:04.815 "trsvcid": "38754" 00:13:04.815 }, 00:13:04.815 "auth": { 00:13:04.815 "state": "completed", 00:13:04.815 "digest": "sha512", 00:13:04.815 "dhgroup": "ffdhe3072" 00:13:04.815 } 00:13:04.816 } 00:13:04.816 ]' 00:13:04.816 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.816 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.816 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.816 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:04.816 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.816 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.816 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.816 12:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.103 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:13:05.103 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret 
DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:13:06.081 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.081 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:06.081 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.081 12:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.081 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.648 00:13:06.648 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.648 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.648 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.648 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.648 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.648 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.648 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.907 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.907 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.907 { 00:13:06.907 "cntlid": 115, 00:13:06.907 "qid": 0, 00:13:06.907 "state": "enabled", 00:13:06.907 "thread": "nvmf_tgt_poll_group_000", 00:13:06.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:06.907 "listen_address": { 00:13:06.907 "trtype": "TCP", 00:13:06.907 "adrfam": "IPv4", 00:13:06.907 "traddr": "10.0.0.3", 00:13:06.907 "trsvcid": "4420" 00:13:06.907 }, 00:13:06.907 "peer_address": { 00:13:06.907 "trtype": "TCP", 00:13:06.907 "adrfam": "IPv4", 00:13:06.907 "traddr": "10.0.0.1", 00:13:06.907 "trsvcid": "57746" 00:13:06.907 }, 00:13:06.907 "auth": { 00:13:06.907 "state": "completed", 00:13:06.907 "digest": "sha512", 00:13:06.907 "dhgroup": "ffdhe3072" 00:13:06.907 } 00:13:06.907 } 00:13:06.907 ]' 00:13:06.907 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.907 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.907 12:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.907 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:06.907 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.907 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.907 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.907 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.166 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:13:07.166 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid 
bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:13:07.734 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.734 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:07.734 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.734 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.734 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.734 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.734 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:07.734 12:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:07.993 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:07.993 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.993 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:07.993 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:07.993 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:07.993 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.993 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.993 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.993 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.993 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.993 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.993 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.993 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.560 00:13:08.560 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.560 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.560 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.819 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.819 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.819 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.819 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.819 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.819 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.819 { 00:13:08.819 "cntlid": 117, 00:13:08.819 "qid": 0, 00:13:08.819 "state": "enabled", 00:13:08.819 "thread": "nvmf_tgt_poll_group_000", 00:13:08.819 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:08.819 "listen_address": { 00:13:08.819 "trtype": "TCP", 00:13:08.819 "adrfam": "IPv4", 00:13:08.819 "traddr": "10.0.0.3", 00:13:08.819 "trsvcid": "4420" 00:13:08.819 }, 00:13:08.819 "peer_address": { 00:13:08.819 "trtype": "TCP", 00:13:08.819 "adrfam": "IPv4", 00:13:08.819 "traddr": "10.0.0.1", 00:13:08.819 "trsvcid": "57770" 00:13:08.819 }, 00:13:08.819 "auth": { 00:13:08.819 "state": "completed", 00:13:08.819 "digest": "sha512", 00:13:08.819 "dhgroup": "ffdhe3072" 00:13:08.819 } 00:13:08.819 } 00:13:08.819 ]' 00:13:08.819 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.819 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.819 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.819 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:08.819 12:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.819 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.819 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.819 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.388 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:13:09.388 12:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:13:09.956 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.956 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:09.956 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.956 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.956 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.956 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.956 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:09.956 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:10.215 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:10.215 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.215 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:10.215 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:10.215 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:10.215 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.215 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:13:10.215 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.215 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.215 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.215 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:10.215 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:10.215 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:10.783 00:13:10.783 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.783 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.783 12:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.783 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.783 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.783 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.783 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.783 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.783 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.783 { 00:13:10.783 "cntlid": 119, 00:13:10.783 "qid": 0, 00:13:10.783 "state": "enabled", 00:13:10.783 "thread": "nvmf_tgt_poll_group_000", 00:13:10.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:10.783 "listen_address": { 00:13:10.783 "trtype": "TCP", 00:13:10.783 "adrfam": "IPv4", 00:13:10.783 "traddr": "10.0.0.3", 00:13:10.783 "trsvcid": "4420" 00:13:10.783 }, 00:13:10.783 "peer_address": { 00:13:10.783 "trtype": "TCP", 00:13:10.783 "adrfam": "IPv4", 00:13:10.783 "traddr": "10.0.0.1", 00:13:10.783 "trsvcid": "57800" 00:13:10.783 }, 00:13:10.783 "auth": { 00:13:10.783 "state": "completed", 00:13:10.783 "digest": "sha512", 00:13:10.783 "dhgroup": "ffdhe3072" 00:13:10.783 } 00:13:10.783 } 00:13:10.783 ]' 00:13:10.783 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.042 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.042 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.043 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:11.043 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.043 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.043 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.043 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.301 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:13:11.301 12:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:13:11.869 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.869 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:11.869 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.869 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.869 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.869 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:11.869 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.869 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:11.869 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.129 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.697 00:13:12.697 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.697 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.697 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.697 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.957 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.957 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.957 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.957 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.957 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.957 { 00:13:12.957 "cntlid": 121, 00:13:12.957 "qid": 0, 00:13:12.957 "state": "enabled", 00:13:12.957 "thread": "nvmf_tgt_poll_group_000", 00:13:12.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:12.957 "listen_address": { 00:13:12.957 "trtype": "TCP", 00:13:12.957 "adrfam": "IPv4", 00:13:12.957 "traddr": "10.0.0.3", 00:13:12.957 "trsvcid": "4420" 00:13:12.957 }, 00:13:12.957 "peer_address": { 00:13:12.957 "trtype": "TCP", 00:13:12.957 "adrfam": "IPv4", 00:13:12.957 "traddr": "10.0.0.1", 00:13:12.957 "trsvcid": "57822" 00:13:12.957 }, 00:13:12.957 "auth": { 00:13:12.957 "state": "completed", 00:13:12.957 "digest": "sha512", 00:13:12.957 "dhgroup": "ffdhe4096" 00:13:12.957 } 00:13:12.957 } 00:13:12.957 ]' 00:13:12.957 12:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.957 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:12.957 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.957 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:12.957 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.957 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.957 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.957 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.216 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret 
DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:13:13.216 12:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.151 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.152 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.152 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.718 00:13:14.718 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.718 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.718 12:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:14.977 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.977 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.977 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.977 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.977 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.977 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.977 { 00:13:14.977 "cntlid": 123, 00:13:14.977 "qid": 0, 00:13:14.977 "state": "enabled", 00:13:14.977 "thread": "nvmf_tgt_poll_group_000", 00:13:14.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:14.977 "listen_address": { 00:13:14.977 "trtype": "TCP", 00:13:14.977 "adrfam": "IPv4", 00:13:14.977 "traddr": "10.0.0.3", 00:13:14.977 "trsvcid": "4420" 00:13:14.977 }, 00:13:14.977 "peer_address": { 00:13:14.977 "trtype": "TCP", 00:13:14.977 "adrfam": "IPv4", 00:13:14.977 "traddr": "10.0.0.1", 00:13:14.977 "trsvcid": "37360" 00:13:14.977 }, 00:13:14.977 "auth": { 00:13:14.977 "state": "completed", 00:13:14.977 "digest": "sha512", 00:13:14.977 "dhgroup": "ffdhe4096" 00:13:14.977 } 00:13:14.977 } 00:13:14.977 ]' 00:13:14.977 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.977 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:14.977 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:14.977 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:14.977 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:14.977 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.977 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.977 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.546 12:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:13:15.546 12:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:13:16.114 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.114 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:16.114 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.114 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.114 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.114 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.114 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:16.114 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:16.373 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:16.373 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.373 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:16.373 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:16.373 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:16.373 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.373 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.373 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.373 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.373 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.373 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.374 12:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.374 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.633 00:13:16.633 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:16.633 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:16.633 12:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.900 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.900 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.900 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.900 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.900 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.900 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.900 { 00:13:16.900 "cntlid": 125, 00:13:16.900 "qid": 0, 00:13:16.900 "state": "enabled", 00:13:16.900 "thread": "nvmf_tgt_poll_group_000", 00:13:16.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:16.900 "listen_address": { 00:13:16.900 "trtype": "TCP", 00:13:16.900 "adrfam": "IPv4", 00:13:16.900 "traddr": "10.0.0.3", 00:13:16.900 "trsvcid": "4420" 00:13:16.900 }, 00:13:16.900 "peer_address": { 00:13:16.900 "trtype": "TCP", 00:13:16.900 "adrfam": "IPv4", 00:13:16.900 "traddr": "10.0.0.1", 00:13:16.900 "trsvcid": "37396" 00:13:16.900 }, 00:13:16.900 "auth": { 00:13:16.900 "state": "completed", 00:13:16.900 "digest": "sha512", 00:13:16.900 "dhgroup": "ffdhe4096" 00:13:16.900 } 00:13:16.900 } 00:13:16.900 ]' 00:13:16.900 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.900 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:16.900 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.900 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:16.900 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.173 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.173 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.174 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.433 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:13:17.433 12:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:13:18.002 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.002 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:18.002 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.002 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.002 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.002 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.002 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:18.002 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:18.261 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:18.261 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.261 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:18.261 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:18.261 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:18.261 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.261 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:13:18.261 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.261 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.261 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.261 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:18.261 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.261 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.829 00:13:18.829 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:18.829 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:18.829 12:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.088 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.088 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.088 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.088 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.088 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.088 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.088 { 00:13:19.088 "cntlid": 127, 00:13:19.088 "qid": 0, 00:13:19.088 "state": "enabled", 00:13:19.088 "thread": "nvmf_tgt_poll_group_000", 00:13:19.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:19.088 "listen_address": { 00:13:19.088 "trtype": "TCP", 00:13:19.088 "adrfam": "IPv4", 00:13:19.088 "traddr": "10.0.0.3", 00:13:19.088 "trsvcid": "4420" 00:13:19.088 }, 00:13:19.088 "peer_address": { 00:13:19.088 "trtype": "TCP", 00:13:19.088 "adrfam": "IPv4", 00:13:19.088 "traddr": "10.0.0.1", 00:13:19.088 "trsvcid": "37420" 00:13:19.088 }, 00:13:19.088 "auth": { 00:13:19.088 "state": "completed", 00:13:19.088 "digest": "sha512", 00:13:19.088 "dhgroup": "ffdhe4096" 00:13:19.088 } 00:13:19.088 } 00:13:19.088 ]' 00:13:19.089 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.089 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.089 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.089 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:19.089 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.089 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.089 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.089 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.658 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:13:19.658 12:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:13:20.226 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.226 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:20.226 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.226 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.226 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.226 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:20.226 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.226 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:20.226 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:20.486 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:20.486 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.486 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:20.486 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:20.486 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:20.486 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.486 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.486 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.486 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.486 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.486 12:33:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.486 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.486 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.745 00:13:20.745 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.745 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.745 12:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.003 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.003 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.003 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.003 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.003 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.003 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.003 { 00:13:21.003 "cntlid": 129, 00:13:21.003 "qid": 0, 00:13:21.003 "state": "enabled", 00:13:21.003 "thread": "nvmf_tgt_poll_group_000", 00:13:21.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:21.004 "listen_address": { 00:13:21.004 "trtype": "TCP", 00:13:21.004 "adrfam": "IPv4", 00:13:21.004 "traddr": "10.0.0.3", 00:13:21.004 "trsvcid": "4420" 00:13:21.004 }, 00:13:21.004 "peer_address": { 00:13:21.004 "trtype": "TCP", 00:13:21.004 "adrfam": "IPv4", 00:13:21.004 "traddr": "10.0.0.1", 00:13:21.004 "trsvcid": "37436" 00:13:21.004 }, 00:13:21.004 "auth": { 00:13:21.004 "state": "completed", 00:13:21.004 "digest": "sha512", 00:13:21.004 "dhgroup": "ffdhe6144" 00:13:21.004 } 00:13:21.004 } 00:13:21.004 ]' 00:13:21.004 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.264 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.264 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.265 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:21.265 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.265 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.265 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.265 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.522 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:13:21.522 12:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.459 12:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.459 12:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.027 00:13:23.027 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.027 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.027 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.286 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.286 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.286 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.286 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.286 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.286 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.286 { 00:13:23.286 "cntlid": 131, 00:13:23.286 "qid": 0, 00:13:23.286 "state": "enabled", 00:13:23.286 "thread": "nvmf_tgt_poll_group_000", 00:13:23.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:23.286 "listen_address": { 00:13:23.286 "trtype": "TCP", 00:13:23.286 "adrfam": "IPv4", 00:13:23.286 "traddr": "10.0.0.3", 00:13:23.286 "trsvcid": "4420" 00:13:23.286 }, 00:13:23.286 "peer_address": { 00:13:23.286 "trtype": "TCP", 00:13:23.286 "adrfam": "IPv4", 00:13:23.286 "traddr": "10.0.0.1", 00:13:23.286 "trsvcid": "37452" 00:13:23.286 }, 00:13:23.286 "auth": { 00:13:23.286 "state": "completed", 00:13:23.286 "digest": "sha512", 00:13:23.286 "dhgroup": "ffdhe6144" 00:13:23.286 } 00:13:23.286 } 00:13:23.286 ]' 00:13:23.286 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.286 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.286 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:23.545 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:23.545 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:23.545 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.545 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.546 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.804 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:13:23.804 12:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.739 12:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.739 12:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.307 00:13:25.307 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.307 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.307 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.566 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.566 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.566 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.566 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.566 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.566 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.566 { 00:13:25.566 "cntlid": 133, 00:13:25.566 "qid": 0, 00:13:25.566 "state": "enabled", 00:13:25.566 "thread": "nvmf_tgt_poll_group_000", 00:13:25.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:25.566 "listen_address": { 00:13:25.566 "trtype": "TCP", 00:13:25.566 "adrfam": "IPv4", 00:13:25.566 "traddr": "10.0.0.3", 00:13:25.566 "trsvcid": "4420" 00:13:25.566 }, 00:13:25.566 "peer_address": { 00:13:25.566 "trtype": "TCP", 00:13:25.566 "adrfam": "IPv4", 00:13:25.566 "traddr": "10.0.0.1", 00:13:25.566 "trsvcid": "47088" 00:13:25.566 }, 00:13:25.566 "auth": { 00:13:25.566 "state": "completed", 00:13:25.566 "digest": "sha512", 00:13:25.566 "dhgroup": "ffdhe6144" 00:13:25.566 } 00:13:25.566 } 00:13:25.566 ]' 00:13:25.566 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.566 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:25.566 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.825 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:25.825 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.825 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.825 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.825 12:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.083 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:13:26.083 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:13:26.652 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.652 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:26.652 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.652 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.652 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.652 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.652 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:26.652 12:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:26.911 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:26.911 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.911 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:26.911 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:26.911 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:26.911 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.911 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:13:26.911 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.911 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.911 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.911 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:26.911 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:26.911 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:27.478 00:13:27.478 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.478 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.478 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.737 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.737 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.737 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.737 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.737 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.737 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.737 { 00:13:27.737 "cntlid": 135, 00:13:27.737 "qid": 0, 00:13:27.737 "state": "enabled", 00:13:27.737 "thread": "nvmf_tgt_poll_group_000", 00:13:27.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:27.737 "listen_address": { 00:13:27.737 "trtype": "TCP", 00:13:27.737 "adrfam": "IPv4", 00:13:27.737 "traddr": "10.0.0.3", 00:13:27.737 "trsvcid": "4420" 00:13:27.737 }, 00:13:27.737 "peer_address": { 00:13:27.737 "trtype": "TCP", 00:13:27.737 "adrfam": "IPv4", 00:13:27.737 "traddr": "10.0.0.1", 00:13:27.737 "trsvcid": "47126" 00:13:27.737 }, 00:13:27.737 "auth": { 00:13:27.737 "state": "completed", 00:13:27.737 "digest": "sha512", 00:13:27.737 "dhgroup": "ffdhe6144" 00:13:27.737 } 00:13:27.737 } 00:13:27.737 ]' 00:13:27.737 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.737 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:27.737 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.737 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:27.737 12:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.997 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.997 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.997 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.997 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:13:27.997 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:13:28.933 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.933 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:28.933 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.933 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.933 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.933 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:28.933 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.933 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:28.933 12:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:28.933 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:28.933 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.933 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:28.933 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:28.933 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:28.933 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.933 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.933 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.933 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.933 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.933 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.933 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:28.933 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.523 00:13:29.523 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.523 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.523 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.783 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.783 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.783 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.783 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.783 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.783 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.783 { 00:13:29.783 "cntlid": 137, 00:13:29.783 "qid": 0, 00:13:29.783 "state": "enabled", 00:13:29.783 "thread": "nvmf_tgt_poll_group_000", 00:13:29.783 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:29.783 "listen_address": { 00:13:29.783 "trtype": "TCP", 00:13:29.783 "adrfam": "IPv4", 00:13:29.783 "traddr": "10.0.0.3", 00:13:29.783 "trsvcid": "4420" 00:13:29.783 }, 00:13:29.783 "peer_address": { 00:13:29.783 "trtype": "TCP", 00:13:29.783 "adrfam": "IPv4", 00:13:29.783 "traddr": "10.0.0.1", 00:13:29.783 "trsvcid": "47154" 00:13:29.783 }, 00:13:29.783 "auth": { 00:13:29.783 "state": "completed", 00:13:29.783 "digest": "sha512", 00:13:29.783 "dhgroup": "ffdhe8192" 00:13:29.783 } 00:13:29.783 } 00:13:29.783 ]' 00:13:29.783 12:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.783 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.783 12:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.042 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:30.042 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.042 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.042 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.042 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.302 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:13:30.302 12:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:13:30.870 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.870 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:30.870 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.870 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.870 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.870 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.870 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:30.870 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:31.130 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:31.130 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.130 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:31.130 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:31.130 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:31.130 12:33:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.130 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.130 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.130 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.130 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.130 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.130 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.130 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.067 00:13:32.067 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.067 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:32.067 12:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.067 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.067 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.067 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.067 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.067 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.067 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.067 { 00:13:32.067 "cntlid": 139, 00:13:32.067 "qid": 0, 00:13:32.067 "state": "enabled", 00:13:32.067 "thread": "nvmf_tgt_poll_group_000", 00:13:32.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:32.067 "listen_address": { 00:13:32.067 "trtype": "TCP", 00:13:32.067 "adrfam": "IPv4", 00:13:32.067 "traddr": "10.0.0.3", 00:13:32.067 "trsvcid": "4420" 00:13:32.067 }, 00:13:32.067 "peer_address": { 00:13:32.067 "trtype": "TCP", 00:13:32.067 "adrfam": "IPv4", 00:13:32.067 "traddr": "10.0.0.1", 00:13:32.067 "trsvcid": "47182" 00:13:32.067 }, 00:13:32.067 "auth": { 00:13:32.067 "state": "completed", 00:13:32.067 "digest": "sha512", 00:13:32.067 "dhgroup": "ffdhe8192" 00:13:32.067 } 00:13:32.068 } 00:13:32.068 ]' 00:13:32.068 12:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.068 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:32.068 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.327 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:32.327 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.327 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.327 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.327 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.587 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:13:32.587 12:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: --dhchap-ctrl-secret DHHC-1:02:ZDBiNGU1YWY4ZDJlOWJhZTBmNDk5MjdkMDYzY2RjZWE1NTNhOGZiODBhYjBkN2NhrXUOSg==: 00:13:33.155 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.155 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:33.155 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.155 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.155 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.155 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.155 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:33.155 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:33.414 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:33.414 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.414 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:33.414 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:33.414 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:33.414 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.414 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.414 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.414 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.414 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.414 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.414 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.414 12:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.982 00:13:34.240 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:34.240 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:34.240 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.240 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.240 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.240 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.240 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.499 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.499 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.499 { 00:13:34.499 "cntlid": 141, 00:13:34.499 "qid": 0, 00:13:34.499 "state": "enabled", 00:13:34.499 "thread": "nvmf_tgt_poll_group_000", 00:13:34.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:34.499 "listen_address": { 00:13:34.499 "trtype": "TCP", 00:13:34.499 "adrfam": "IPv4", 00:13:34.499 "traddr": "10.0.0.3", 00:13:34.499 "trsvcid": "4420" 00:13:34.499 }, 00:13:34.499 "peer_address": { 00:13:34.499 "trtype": "TCP", 00:13:34.499 "adrfam": "IPv4", 00:13:34.499 "traddr": "10.0.0.1", 00:13:34.499 "trsvcid": "47212" 00:13:34.499 }, 00:13:34.499 "auth": { 00:13:34.499 "state": "completed", 00:13:34.499 "digest": 
"sha512", 00:13:34.499 "dhgroup": "ffdhe8192" 00:13:34.499 } 00:13:34.499 } 00:13:34.499 ]' 00:13:34.499 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.499 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:34.499 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.499 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:34.499 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.499 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.499 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.499 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.758 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:13:34.758 12:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:01:ODliMTgxYzlkZWU2MTg3ZmI1MDJmYWQ5ZDA3ZTdkZGKf3wvV: 00:13:35.325 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.325 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:35.325 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.325 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.325 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.325 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:35.325 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:35.325 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:35.584 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:35.584 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.584 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:35.584 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:35.584 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:35.584 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.584 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:13:35.584 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.584 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.584 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.584 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:35.584 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:35.584 12:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:36.152 00:13:36.152 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.152 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.152 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:36.720 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.720 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.720 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.720 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.720 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.720 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:36.720 { 00:13:36.720 "cntlid": 143, 00:13:36.720 "qid": 0, 00:13:36.720 "state": "enabled", 00:13:36.720 "thread": "nvmf_tgt_poll_group_000", 00:13:36.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:36.720 "listen_address": { 00:13:36.720 "trtype": "TCP", 00:13:36.720 "adrfam": "IPv4", 00:13:36.720 "traddr": "10.0.0.3", 00:13:36.720 "trsvcid": "4420" 00:13:36.720 }, 00:13:36.720 "peer_address": { 00:13:36.720 "trtype": "TCP", 00:13:36.720 "adrfam": "IPv4", 00:13:36.720 "traddr": "10.0.0.1", 00:13:36.720 "trsvcid": "44016" 00:13:36.720 }, 00:13:36.720 "auth": { 00:13:36.720 "state": "completed", 00:13:36.720 
"digest": "sha512", 00:13:36.720 "dhgroup": "ffdhe8192" 00:13:36.720 } 00:13:36.720 } 00:13:36.720 ]' 00:13:36.720 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:36.720 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:36.720 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:36.720 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:36.720 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:36.720 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.720 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.720 12:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.979 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:13:36.979 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:13:37.547 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.806 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:37.806 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.806 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.806 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.806 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:37.806 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:37.806 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:37.806 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:37.806 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:37.806 12:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:38.066 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:38.066 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.066 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:38.066 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:38.066 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:38.066 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.066 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.066 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.066 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.066 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.066 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.066 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.066 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.635 00:13:38.635 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:38.635 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.635 12:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.204 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.204 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.204 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.204 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.204 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.204 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.204 { 00:13:39.204 "cntlid": 145, 00:13:39.204 "qid": 0, 00:13:39.204 "state": "enabled", 00:13:39.204 "thread": "nvmf_tgt_poll_group_000", 00:13:39.204 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:39.204 "listen_address": { 00:13:39.204 "trtype": "TCP", 00:13:39.204 "adrfam": "IPv4", 00:13:39.204 "traddr": "10.0.0.3", 00:13:39.204 "trsvcid": "4420" 00:13:39.204 }, 00:13:39.204 "peer_address": { 00:13:39.204 "trtype": "TCP", 00:13:39.204 "adrfam": "IPv4", 00:13:39.204 "traddr": "10.0.0.1", 00:13:39.204 "trsvcid": "44044" 00:13:39.204 }, 00:13:39.204 "auth": { 00:13:39.204 "state": "completed", 00:13:39.204 "digest": "sha512", 00:13:39.204 "dhgroup": "ffdhe8192" 00:13:39.204 } 00:13:39.204 } 00:13:39.204 ]' 00:13:39.204 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.204 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:39.204 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.204 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:39.204 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.204 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.204 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.204 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.463 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:13:39.463 12:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:00:YWY1ZWNiNmFmMjJjOWUzYzY2MTdmZDQwZDZjYzRhMGRjYTJjY2JkZDZhY2RmNDcxkQB06A==: --dhchap-ctrl-secret DHHC-1:03:ODBhZmIzNDg0ZDRkOTRhZDExMjBlMjA1M2NiNWMxZjA0YzM5ZDVjYjUyNTg2ZTJjY2EzZDRhNjNlYjAyYWQ4Y+Ou8xQ=: 00:13:40.032 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.032 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:40.032 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.032 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.032 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.032 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 00:13:40.032 12:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.032 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.032 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.032 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:40.032 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:40.032 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:40.032 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:40.032 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.032 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:40.032 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.033 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:40.033 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:40.033 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:40.601 request: 00:13:40.601 { 00:13:40.601 "name": "nvme0", 00:13:40.601 "trtype": "tcp", 00:13:40.601 "traddr": "10.0.0.3", 00:13:40.601 "adrfam": "ipv4", 00:13:40.601 "trsvcid": "4420", 00:13:40.601 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:40.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:40.601 "prchk_reftag": false, 00:13:40.601 "prchk_guard": false, 00:13:40.601 "hdgst": false, 00:13:40.601 "ddgst": false, 00:13:40.601 "dhchap_key": "key2", 00:13:40.601 "allow_unrecognized_csi": false, 00:13:40.601 "method": "bdev_nvme_attach_controller", 00:13:40.601 "req_id": 1 00:13:40.601 } 00:13:40.601 Got JSON-RPC error response 00:13:40.601 response: 00:13:40.601 { 00:13:40.601 "code": -5, 00:13:40.601 "message": "Input/output error" 00:13:40.601 } 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:40.906 
12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.906 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:40.907 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:40.907 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:40.907 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:40.907 12:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:41.475 request: 00:13:41.475 { 00:13:41.475 "name": "nvme0", 00:13:41.475 "trtype": "tcp", 00:13:41.475 "traddr": "10.0.0.3", 00:13:41.475 "adrfam": "ipv4", 00:13:41.475 "trsvcid": "4420", 00:13:41.475 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:41.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:41.475 "prchk_reftag": false, 00:13:41.475 "prchk_guard": false, 00:13:41.475 "hdgst": false, 00:13:41.475 "ddgst": false, 00:13:41.475 "dhchap_key": "key1", 00:13:41.475 "dhchap_ctrlr_key": "ckey2", 00:13:41.475 "allow_unrecognized_csi": false, 00:13:41.475 "method": "bdev_nvme_attach_controller", 00:13:41.475 "req_id": 1 00:13:41.475 } 00:13:41.475 Got JSON-RPC error response 00:13:41.475 response: 00:13:41.475 { 
00:13:41.475 "code": -5, 00:13:41.475 "message": "Input/output error" 00:13:41.475 } 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.475 12:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.044 
request: 00:13:42.044 { 00:13:42.044 "name": "nvme0", 00:13:42.044 "trtype": "tcp", 00:13:42.044 "traddr": "10.0.0.3", 00:13:42.044 "adrfam": "ipv4", 00:13:42.044 "trsvcid": "4420", 00:13:42.044 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:42.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:42.044 "prchk_reftag": false, 00:13:42.044 "prchk_guard": false, 00:13:42.044 "hdgst": false, 00:13:42.044 "ddgst": false, 00:13:42.044 "dhchap_key": "key1", 00:13:42.044 "dhchap_ctrlr_key": "ckey1", 00:13:42.044 "allow_unrecognized_csi": false, 00:13:42.044 "method": "bdev_nvme_attach_controller", 00:13:42.044 "req_id": 1 00:13:42.044 } 00:13:42.044 Got JSON-RPC error response 00:13:42.044 response: 00:13:42.044 { 00:13:42.044 "code": -5, 00:13:42.044 "message": "Input/output error" 00:13:42.044 } 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 80108 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 80108 ']' 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 80108 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80108 00:13:42.044 killing process with pid 80108 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80108' 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 80108 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 80108 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:42.044 12:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=83172 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 83172 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 83172 ']' 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.044 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.611 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:42.611 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:42.611 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:42.611 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:42.611 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.611 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.611 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:42.611 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 83172 00:13:42.611 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 83172 ']' 00:13:42.611 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.611 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:42.611 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
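For readability, the target-side configuration that the surrounding trace performs can be condensed into a short sketch. This is only an illustrative reconstruction assembled from the rpc.py invocations visible in this log; the key file paths under /tmp and the subsystem/host NQNs are whatever this particular run generated, not fixed values.

    # Illustrative replay of the target-side DH-HMAC-CHAP setup seen in this trace
    # (all values below are the run-specific ones from the log, not canonical names).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # target RPC, default socket /var/tmp/spdk.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9

    # Register the generated DHHC-1 secrets in the target keyring.
    "$RPC" keyring_file_add_key key0  /tmp/spdk.key-null.rNc
    "$RPC" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.56S
    "$RPC" keyring_file_add_key key1  /tmp/spdk.key-sha256.cxi
    "$RPC" keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yt3

    # Require the host to authenticate on the subsystem with a given key
    # (and, optionally, a controller key for bidirectional authentication).
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
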
00:13:42.611 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:42.611 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.870 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:42.870 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:42.870 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:42.870 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.870 12:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.870 null0 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rNc 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.56S ]] 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.56S 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.cxi 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Yt3 ]] 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Yt3 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:42.870 12:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.6Z8 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Vtw ]] 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vtw 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.mCR 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
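The host-side half of a connect_authenticate iteration, exercised here with sha512/ffdhe8192 and key3, reduces to the following sketch. Again this is only a condensed illustration of the hostrpc calls appearing in this trace (host RPC socket /var/tmp/host.sock); the address, NQNs and key names are the values from this run.

    # Illustrative host-side attach/verify/detach cycle, condensed from the trace above.
    HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9

    # Limit the host to the digest/dhgroup pair under test.
    $HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Attach a controller, authenticating with key3 from the host keyring.
    $HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3

    # The script then confirms the controller exists on the host and, on the target side,
    # that nvmf_subsystem_get_qpairs reports auth state "completed" with the expected
    # digest and dhgroup, before detaching again.
    $HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    $HOSTRPC bdev_nvme_detach_controller nvme0
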
00:13:42.870 12:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.807 nvme0n1 00:13:43.807 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.807 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.807 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.376 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.376 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.376 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.376 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.376 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.376 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.376 { 00:13:44.376 "cntlid": 1, 00:13:44.376 "qid": 0, 00:13:44.376 "state": "enabled", 00:13:44.376 "thread": "nvmf_tgt_poll_group_000", 00:13:44.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:44.376 "listen_address": { 00:13:44.376 "trtype": "TCP", 00:13:44.376 "adrfam": "IPv4", 00:13:44.376 "traddr": "10.0.0.3", 00:13:44.376 "trsvcid": "4420" 00:13:44.376 }, 00:13:44.376 "peer_address": { 00:13:44.376 "trtype": "TCP", 00:13:44.376 "adrfam": "IPv4", 00:13:44.376 "traddr": "10.0.0.1", 00:13:44.376 "trsvcid": "44108" 00:13:44.376 }, 00:13:44.376 "auth": { 00:13:44.376 "state": "completed", 00:13:44.376 "digest": "sha512", 00:13:44.376 "dhgroup": "ffdhe8192" 00:13:44.376 } 00:13:44.376 } 00:13:44.376 ]' 00:13:44.376 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.376 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:44.376 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.376 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:44.376 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.376 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.376 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.376 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.635 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:13:44.635 12:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key3 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:45.573 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.574 12:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.833 request: 00:13:45.833 { 00:13:45.833 "name": "nvme0", 00:13:45.833 "trtype": "tcp", 00:13:45.833 "traddr": "10.0.0.3", 00:13:45.833 "adrfam": "ipv4", 00:13:45.833 "trsvcid": "4420", 00:13:45.833 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:45.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:45.833 "prchk_reftag": false, 00:13:45.833 "prchk_guard": false, 00:13:45.833 "hdgst": false, 00:13:45.833 "ddgst": false, 00:13:45.833 "dhchap_key": "key3", 00:13:45.833 "allow_unrecognized_csi": false, 00:13:45.833 "method": "bdev_nvme_attach_controller", 00:13:45.833 "req_id": 1 00:13:45.833 } 00:13:45.833 Got JSON-RPC error response 00:13:45.833 response: 00:13:45.833 { 00:13:45.833 "code": -5, 00:13:45.833 "message": "Input/output error" 00:13:45.833 } 00:13:45.833 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:45.833 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:45.833 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:45.833 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:45.833 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:45.833 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:45.833 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:45.833 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:46.402 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:46.402 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:46.402 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:46.402 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:46.402 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:46.402 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:46.402 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:46.402 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:46.402 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.402 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.402 request: 00:13:46.402 { 00:13:46.402 "name": "nvme0", 00:13:46.402 "trtype": "tcp", 00:13:46.402 "traddr": "10.0.0.3", 00:13:46.402 "adrfam": "ipv4", 00:13:46.402 "trsvcid": "4420", 00:13:46.402 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:46.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:46.402 "prchk_reftag": false, 00:13:46.402 "prchk_guard": false, 00:13:46.402 "hdgst": false, 00:13:46.402 "ddgst": false, 00:13:46.402 "dhchap_key": "key3", 00:13:46.402 "allow_unrecognized_csi": false, 00:13:46.402 "method": "bdev_nvme_attach_controller", 00:13:46.402 "req_id": 1 00:13:46.402 } 00:13:46.403 Got JSON-RPC error response 00:13:46.403 response: 00:13:46.403 { 00:13:46.403 "code": -5, 00:13:46.403 "message": "Input/output error" 00:13:46.403 } 00:13:46.661 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:46.661 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:46.661 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:46.661 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:46.661 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:46.661 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:46.661 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:46.661 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:46.662 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:46.662 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:46.662 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:46.662 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.662 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.662 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.662 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:46.662 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.662 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.662 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.662 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:46.662 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:46.662 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:46.662 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:46.920 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:46.920 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:46.921 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:46.921 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:46.921 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:46.921 12:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:47.180 request: 00:13:47.180 { 00:13:47.180 "name": "nvme0", 00:13:47.180 "trtype": "tcp", 00:13:47.180 "traddr": "10.0.0.3", 00:13:47.180 "adrfam": "ipv4", 00:13:47.180 "trsvcid": "4420", 00:13:47.180 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:47.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:47.180 "prchk_reftag": false, 00:13:47.180 "prchk_guard": false, 00:13:47.180 "hdgst": false, 00:13:47.180 "ddgst": false, 00:13:47.180 "dhchap_key": "key0", 00:13:47.180 "dhchap_ctrlr_key": "key1", 00:13:47.180 "allow_unrecognized_csi": false, 00:13:47.180 "method": "bdev_nvme_attach_controller", 00:13:47.180 "req_id": 1 00:13:47.180 } 00:13:47.180 Got JSON-RPC error response 00:13:47.180 response: 00:13:47.180 { 00:13:47.180 "code": -5, 00:13:47.180 "message": "Input/output error" 00:13:47.180 } 00:13:47.180 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:47.180 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:47.180 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:47.180 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:13:47.180 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:47.180 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:47.180 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:47.438 nvme0n1 00:13:47.438 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:47.438 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:47.438 12:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.018 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.018 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.018 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.018 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 00:13:48.018 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.018 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.018 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.018 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:48.018 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:48.018 12:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:49.398 nvme0n1 00:13:49.398 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:49.399 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:49.399 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.399 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.399 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:49.399 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.399 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.399 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.399 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:49.399 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.399 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:49.658 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.658 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:13:49.658 12:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid bae1b18f-cc14-461e-aa63-e888be1a2cc9 -l 0 --dhchap-secret DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: --dhchap-ctrl-secret DHHC-1:03:ZDFjMDM0MzY3ZWNiN2NiMzZkNWNhMzZiMmI4NDA0YmI0ZGEwNmVkNmUxMmM3MzEwOGQ0N2NlODZhMmIwOTA0YodrmJs=: 00:13:50.227 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:50.227 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:50.227 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:50.227 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:50.227 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:50.227 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:50.227 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:50.227 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.227 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.486 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:50.486 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:50.486 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:50.486 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:50.486 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:50.486 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:50.486 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:50.486 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:50.486 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:50.486 12:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:51.054 request: 00:13:51.054 { 00:13:51.054 "name": "nvme0", 00:13:51.054 "trtype": "tcp", 00:13:51.054 "traddr": "10.0.0.3", 00:13:51.054 "adrfam": "ipv4", 00:13:51.054 "trsvcid": "4420", 00:13:51.054 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:51.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9", 00:13:51.054 "prchk_reftag": false, 00:13:51.054 "prchk_guard": false, 00:13:51.054 "hdgst": false, 00:13:51.054 "ddgst": false, 00:13:51.054 "dhchap_key": "key1", 00:13:51.054 "allow_unrecognized_csi": false, 00:13:51.054 "method": "bdev_nvme_attach_controller", 00:13:51.054 "req_id": 1 00:13:51.054 } 00:13:51.054 Got JSON-RPC error response 00:13:51.054 response: 00:13:51.054 { 00:13:51.054 "code": -5, 00:13:51.054 "message": "Input/output error" 00:13:51.054 } 00:13:51.313 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:51.313 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:51.313 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:51.313 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:51.313 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:51.313 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:51.313 12:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:52.251 nvme0n1 00:13:52.251 
12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:52.251 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:52.251 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.510 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.510 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.510 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.770 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:52.770 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.770 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.770 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.770 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:52.770 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:52.770 12:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:53.356 nvme0n1 00:13:53.356 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:53.356 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:53.356 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.615 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.615 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.615 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.875 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:53.875 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.875 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.875 12:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.875 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: '' 2s 00:13:53.875 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:53.875 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:53.875 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: 00:13:53.875 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:53.875 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:53.875 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:53.875 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: ]] 00:13:53.875 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTY5NDg2NTQzZjY1M2NhMGY0MTI4N2RmYTVlOGFiYmNRMQ8f: 00:13:53.875 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:53.875 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:53.875 12:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:55.782 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:13:55.782 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:13:55.782 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:13:55.782 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:13:55.782 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:13:55.782 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:13:55.782 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:13:55.782 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:55.782 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.782 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.782 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.782 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: 2s 00:13:55.782 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:55.782 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:55.782 12:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:55.783 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: 00:13:55.783 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:55.783 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:55.783 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:55.783 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: ]] 00:13:55.783 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NjljZjJhY2JhNzM1ZDY5NmRlN2MzYTg0YTk5YjkyMmNmNTdlOGZmODEzYzlhYWY5CxoIFA==: 00:13:55.783 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:55.783 12:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:58.319 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:58.319 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:13:58.319 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:13:58.319 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:13:58.319 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:13:58.319 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:13:58.319 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:13:58.319 12:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.319 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:58.319 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.319 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.319 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.319 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:58.319 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:58.319 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:58.887 nvme0n1 00:13:58.887 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:58.887 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.887 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.887 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.887 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:58.887 12:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:59.454 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:59.454 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.454 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:59.713 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.713 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:13:59.713 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.713 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.713 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.713 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:59.713 12:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:59.972 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:59.972 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.972 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:00.625 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.625 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:00.625 12:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.625 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.625 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.625 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:00.625 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:00.625 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:00.625 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:00.625 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:00.625 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:00.625 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:00.625 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:00.625 12:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:00.884 request: 00:14:00.884 { 00:14:00.884 "name": "nvme0", 00:14:00.884 "dhchap_key": "key1", 00:14:00.884 "dhchap_ctrlr_key": "key3", 00:14:00.884 "method": "bdev_nvme_set_keys", 00:14:00.884 "req_id": 1 00:14:00.884 } 00:14:00.884 Got JSON-RPC error response 00:14:00.884 response: 00:14:00.884 { 00:14:00.884 "code": -13, 00:14:00.884 "message": "Permission denied" 00:14:00.884 } 00:14:01.143 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:01.143 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:01.143 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:01.143 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:01.143 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:01.143 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:01.143 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.402 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:01.402 12:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:02.338 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:02.338 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.338 12:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:02.597 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:02.597 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:02.597 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.597 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.597 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.597 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:02.597 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:02.597 12:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:03.973 nvme0n1 00:14:03.973 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:03.973 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.973 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.973 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.973 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:03.973 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:03.973 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:03.973 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:03.973 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.973 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:03.973 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.973 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:14:03.973 12:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:04.539 request: 00:14:04.539 { 00:14:04.539 "name": "nvme0", 00:14:04.539 "dhchap_key": "key2", 00:14:04.539 "dhchap_ctrlr_key": "key0", 00:14:04.539 "method": "bdev_nvme_set_keys", 00:14:04.539 "req_id": 1 00:14:04.539 } 00:14:04.539 Got JSON-RPC error response 00:14:04.539 response: 00:14:04.539 { 00:14:04.539 "code": -13, 00:14:04.539 "message": "Permission denied" 00:14:04.539 } 00:14:04.539 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:04.539 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:04.539 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:04.539 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:04.539 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:04.539 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.539 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:04.798 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:04.798 12:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:05.735 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:05.735 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:05.735 12:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.994 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:05.994 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:05.994 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:05.994 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 80127 00:14:05.994 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 80127 ']' 00:14:05.994 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 80127 00:14:05.994 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:05.994 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:05.994 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80127 00:14:05.994 killing process with pid 80127 00:14:05.994 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:05.994 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:05.994 12:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80127' 00:14:05.994 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 80127 00:14:05.994 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 80127 00:14:06.254 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:06.254 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:06.254 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:06.254 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:06.254 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:06.254 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:06.254 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:06.254 rmmod nvme_tcp 00:14:06.254 rmmod nvme_fabrics 00:14:06.254 rmmod nvme_keyring 00:14:06.254 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:06.254 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:06.254 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:06.254 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 83172 ']' 00:14:06.254 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 83172 00:14:06.254 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 83172 ']' 00:14:06.254 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 83172 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83172 00:14:06.513 killing process with pid 83172 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83172' 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 83172 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 83172 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 
00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:06.513 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.rNc /tmp/spdk.key-sha256.cxi /tmp/spdk.key-sha384.6Z8 /tmp/spdk.key-sha512.mCR /tmp/spdk.key-sha512.56S /tmp/spdk.key-sha384.Yt3 /tmp/spdk.key-sha256.Vtw '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:06.772 00:14:06.772 real 3m8.934s 00:14:06.772 user 7m34.888s 00:14:06.772 sys 0m27.906s 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.772 ************************************ 00:14:06.772 END TEST nvmf_auth_target 
00:14:06.772 ************************************ 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:06.772 12:34:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:06.772 ************************************ 00:14:06.772 START TEST nvmf_bdevio_no_huge 00:14:06.772 ************************************ 00:14:06.772 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:07.033 * Looking for test storage... 00:14:07.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:07.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.033 --rc genhtml_branch_coverage=1 00:14:07.033 --rc genhtml_function_coverage=1 00:14:07.033 --rc genhtml_legend=1 00:14:07.033 --rc geninfo_all_blocks=1 00:14:07.033 --rc geninfo_unexecuted_blocks=1 00:14:07.033 00:14:07.033 ' 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:07.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.033 --rc genhtml_branch_coverage=1 00:14:07.033 --rc genhtml_function_coverage=1 00:14:07.033 --rc genhtml_legend=1 00:14:07.033 --rc geninfo_all_blocks=1 00:14:07.033 --rc geninfo_unexecuted_blocks=1 00:14:07.033 00:14:07.033 ' 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:07.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.033 --rc genhtml_branch_coverage=1 00:14:07.033 --rc genhtml_function_coverage=1 00:14:07.033 --rc genhtml_legend=1 00:14:07.033 --rc geninfo_all_blocks=1 00:14:07.033 --rc geninfo_unexecuted_blocks=1 00:14:07.033 00:14:07.033 ' 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:07.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.033 --rc genhtml_branch_coverage=1 00:14:07.033 --rc genhtml_function_coverage=1 00:14:07.033 --rc genhtml_legend=1 00:14:07.033 --rc geninfo_all_blocks=1 00:14:07.033 --rc geninfo_unexecuted_blocks=1 00:14:07.033 00:14:07.033 ' 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:07.033 
12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:07.033 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:07.034 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@456 -- # nvmf_veth_init 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:07.034 
12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:07.034 Cannot find device "nvmf_init_br" 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:07.034 Cannot find device "nvmf_init_br2" 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:07.034 Cannot find device "nvmf_tgt_br" 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:07.034 Cannot find device "nvmf_tgt_br2" 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:07.034 Cannot find device "nvmf_init_br" 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:07.034 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:07.293 Cannot find device "nvmf_init_br2" 00:14:07.293 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:07.293 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:07.293 Cannot find device "nvmf_tgt_br" 00:14:07.293 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:07.293 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:07.293 Cannot find device "nvmf_tgt_br2" 00:14:07.293 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:07.293 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:07.293 Cannot find device "nvmf_br" 00:14:07.293 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:07.293 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:07.293 Cannot find device "nvmf_init_if" 00:14:07.293 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:07.293 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:07.293 Cannot find device "nvmf_init_if2" 00:14:07.293 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:07.293 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:07.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.293 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:07.293 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:07.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.293 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:07.294 12:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:07.294 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:07.553 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:07.553 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:07.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:07.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:14:07.553 00:14:07.553 --- 10.0.0.3 ping statistics --- 00:14:07.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.553 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:07.553 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:07.553 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:07.554 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:14:07.554 00:14:07.554 --- 10.0.0.4 ping statistics --- 00:14:07.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.554 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:07.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:07.554 00:14:07.554 --- 10.0.0.1 ping statistics --- 00:14:07.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.554 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:07.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:07.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:14:07.554 00:14:07.554 --- 10.0.0.2 ping statistics --- 00:14:07.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.554 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # return 0 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=83810 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 83810 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 83810 ']' 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:07.554 12:34:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:07.554 [2024-11-19 12:34:12.661366] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
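For reference, the nvmf_veth_init sequence traced above builds a small veth/bridge topology so the NVMe/TCP target (launched inside the nvmf_tgt_ns_spdk namespace with --no-huge -s 1024, i.e. without hugepages) is reachable from the host-side initiator. The following is a condensed sketch of the commands already logged, not the script itself; the second initiator/target pair and the 10.0.0.2/10.0.0.4 addresses are omitted:

  # assumes root; names and addresses are the ones used in the trace above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the two host-side peers together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
  ping -c 1 10.0.0.3                                           # host -> namespaced target reachability check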
00:14:07.554 [2024-11-19 12:34:12.661507] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:07.554 [2024-11-19 12:34:12.808092] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:07.813 [2024-11-19 12:34:12.900935] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.813 [2024-11-19 12:34:12.901268] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.813 [2024-11-19 12:34:12.901361] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.813 [2024-11-19 12:34:12.901828] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.813 [2024-11-19 12:34:12.902308] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.813 [2024-11-19 12:34:12.902578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:14:07.813 [2024-11-19 12:34:12.902864] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:14:07.813 [2024-11-19 12:34:12.902961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:14:07.813 [2024-11-19 12:34:12.903541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.813 [2024-11-19 12:34:12.908615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:08.382 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:08.382 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:14:08.382 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:08.382 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:08.382 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:08.642 [2024-11-19 12:34:13.682251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:08.642 Malloc0 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.642 12:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:08.642 [2024-11-19 12:34:13.724471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:14:08.642 { 00:14:08.642 "params": { 00:14:08.642 "name": "Nvme$subsystem", 00:14:08.642 "trtype": "$TEST_TRANSPORT", 00:14:08.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:08.642 "adrfam": "ipv4", 00:14:08.642 "trsvcid": "$NVMF_PORT", 00:14:08.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:08.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:08.642 "hdgst": ${hdgst:-false}, 00:14:08.642 "ddgst": ${ddgst:-false} 00:14:08.642 }, 00:14:08.642 "method": "bdev_nvme_attach_controller" 00:14:08.642 } 00:14:08.642 EOF 00:14:08.642 )") 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
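The subsystem wiring traced above is a five-step RPC sequence; rpc_cmd effectively issues these through the repo's scripts/rpc.py against the default /var/tmp/spdk.sock. Sketched here as plain rpc.py invocations for clarity (a condensed sketch of the logged calls, not the exact helper output):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                                       # create the TCP transport (flags as logged)
  rpc.py bdev_malloc_create 64 512 -b Malloc0                                          # 64 MiB malloc bdev with 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001     # allow-any-host, fixed serial
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                      # expose Malloc0 as a namespace
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # listen on the namespaced IP

The bdevio binary is then pointed at this target with a JSON config generated on the fly by gen_nvmf_target_json and passed via --json /dev/fd/62; the rendered JSON (a single bdev_nvme_attach_controller against 10.0.0.3:4420) appears immediately below in the trace.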
00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:14:08.642 12:34:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:14:08.642 "params": { 00:14:08.642 "name": "Nvme1", 00:14:08.642 "trtype": "tcp", 00:14:08.642 "traddr": "10.0.0.3", 00:14:08.642 "adrfam": "ipv4", 00:14:08.642 "trsvcid": "4420", 00:14:08.642 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:08.642 "hdgst": false, 00:14:08.642 "ddgst": false 00:14:08.643 }, 00:14:08.643 "method": "bdev_nvme_attach_controller" 00:14:08.643 }' 00:14:08.643 [2024-11-19 12:34:13.788180] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:08.643 [2024-11-19 12:34:13.788818] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83846 ] 00:14:08.902 [2024-11-19 12:34:13.935405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:08.902 [2024-11-19 12:34:14.041063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.902 [2024-11-19 12:34:14.041228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.902 [2024-11-19 12:34:14.041235] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.902 [2024-11-19 12:34:14.055961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:09.162 I/O targets: 00:14:09.162 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:09.162 00:14:09.162 00:14:09.162 CUnit - A unit testing framework for C - Version 2.1-3 00:14:09.162 http://cunit.sourceforge.net/ 00:14:09.162 00:14:09.162 00:14:09.162 Suite: bdevio tests on: Nvme1n1 00:14:09.162 Test: blockdev write read block ...passed 00:14:09.162 Test: blockdev write zeroes read block ...passed 00:14:09.162 Test: blockdev write zeroes read no split ...passed 00:14:09.162 Test: blockdev write zeroes read split ...passed 00:14:09.162 Test: blockdev write zeroes read split partial ...passed 00:14:09.162 Test: blockdev reset ...[2024-11-19 12:34:14.274076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:09.162 [2024-11-19 12:34:14.274182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd26a0 (9): Bad file descriptor 00:14:09.162 passed 00:14:09.162 Test: blockdev write read 8 blocks ...[2024-11-19 12:34:14.293721] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:09.162 passed 00:14:09.162 Test: blockdev write read size > 128k ...passed 00:14:09.162 Test: blockdev write read invalid size ...passed 00:14:09.162 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:09.162 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:09.162 Test: blockdev write read max offset ...passed 00:14:09.162 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:09.162 Test: blockdev writev readv 8 blocks ...passed 00:14:09.162 Test: blockdev writev readv 30 x 1block ...passed 00:14:09.162 Test: blockdev writev readv block ...passed 00:14:09.162 Test: blockdev writev readv size > 128k ...passed 00:14:09.162 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:09.162 Test: blockdev comparev and writev ...[2024-11-19 12:34:14.302075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.162 [2024-11-19 12:34:14.302132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:09.162 [2024-11-19 12:34:14.302158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.162 [2024-11-19 12:34:14.302172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:09.162 passed 00:14:09.162 Test: blockdev nvme passthru rw ...[2024-11-19 12:34:14.302552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.162 [2024-11-19 12:34:14.302580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:09.162 [2024-11-19 12:34:14.302601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.162 [2024-11-19 12:34:14.302613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:09.162 [2024-11-19 12:34:14.302911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.162 [2024-11-19 12:34:14.302933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:09.162 [2024-11-19 12:34:14.302953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.162 [2024-11-19 12:34:14.302965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:09.162 [2024-11-19 12:34:14.303281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.162 [2024-11-19 12:34:14.303302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:09.162 [2024-11-19 12:34:14.303322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:09.162 [2024-11-19 12:34:14.303333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED 
FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:09.162 passed 00:14:09.162 Test: blockdev nvme passthru vendor specific ...passed 00:14:09.162 Test: blockdev nvme admin passthru ...[2024-11-19 12:34:14.304224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:09.162 [2024-11-19 12:34:14.304254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:09.162 [2024-11-19 12:34:14.304371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:09.162 [2024-11-19 12:34:14.304390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:09.162 [2024-11-19 12:34:14.304503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:09.162 [2024-11-19 12:34:14.304521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:09.162 [2024-11-19 12:34:14.304640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:09.162 [2024-11-19 12:34:14.304658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:09.162 passed 00:14:09.162 Test: blockdev copy ...passed 00:14:09.162 00:14:09.162 Run Summary: Type Total Ran Passed Failed Inactive 00:14:09.162 suites 1 1 n/a 0 0 00:14:09.162 tests 23 23 23 0 0 00:14:09.162 asserts 152 152 152 0 n/a 00:14:09.162 00:14:09.162 Elapsed time = 0.183 seconds 00:14:09.421 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:09.421 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.421 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:09.421 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.421 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:09.421 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:09.421 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:09.421 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:09.679 rmmod nvme_tcp 00:14:09.679 rmmod nvme_fabrics 00:14:09.679 rmmod nvme_keyring 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:14:09.679 12:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 83810 ']' 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 83810 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 83810 ']' 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 83810 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83810 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83810' 00:14:09.679 killing process with pid 83810 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 83810 00:14:09.679 12:34:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 83810 00:14:09.937 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:09.937 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:09.937 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:09.937 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:09.937 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:14:09.937 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:09.937 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:14:09.937 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:09.937 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:09.937 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:09.937 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:09.937 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:09.937 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:09.937 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:10.196 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:10.196 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set 
nvmf_tgt_br down 00:14:10.196 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:10.196 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:10.196 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:10.196 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:10.196 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:10.196 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.196 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:10.196 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.196 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.196 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.196 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:10.196 00:14:10.196 real 0m3.343s 00:14:10.196 user 0m10.137s 00:14:10.196 sys 0m1.329s 00:14:10.196 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:10.196 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:10.196 ************************************ 00:14:10.196 END TEST nvmf_bdevio_no_huge 00:14:10.197 ************************************ 00:14:10.197 12:34:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:10.197 12:34:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:10.197 12:34:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:10.197 12:34:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:10.197 ************************************ 00:14:10.197 START TEST nvmf_tls 00:14:10.197 ************************************ 00:14:10.197 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:10.457 * Looking for test storage... 
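Before the TLS suite proceeds, the bdevio_no_huge run above tears everything down via nvmftestfini. Condensed from the logged commands (the final namespace removal is silenced in the trace, so that step is an assumption):

  sync && modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics   # unload host-side NVMe/TCP modules
  kill 83810                                                       # stop the nvmf_tgt started for this test
  iptables-save | grep -v SPDK_NVMF | iptables-restore             # drop only the SPDK-tagged firewall rules
  ip link set nvmf_init_br nomaster && ip link set nvmf_init_br down
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if && ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                                 # assumed: _remove_spdk_ns output is redirected away above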
00:14:10.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:10.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.457 --rc genhtml_branch_coverage=1 00:14:10.457 --rc genhtml_function_coverage=1 00:14:10.457 --rc genhtml_legend=1 00:14:10.457 --rc geninfo_all_blocks=1 00:14:10.457 --rc geninfo_unexecuted_blocks=1 00:14:10.457 00:14:10.457 ' 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:10.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.457 --rc genhtml_branch_coverage=1 00:14:10.457 --rc genhtml_function_coverage=1 00:14:10.457 --rc genhtml_legend=1 00:14:10.457 --rc geninfo_all_blocks=1 00:14:10.457 --rc geninfo_unexecuted_blocks=1 00:14:10.457 00:14:10.457 ' 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:10.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.457 --rc genhtml_branch_coverage=1 00:14:10.457 --rc genhtml_function_coverage=1 00:14:10.457 --rc genhtml_legend=1 00:14:10.457 --rc geninfo_all_blocks=1 00:14:10.457 --rc geninfo_unexecuted_blocks=1 00:14:10.457 00:14:10.457 ' 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:10.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.457 --rc genhtml_branch_coverage=1 00:14:10.457 --rc genhtml_function_coverage=1 00:14:10.457 --rc genhtml_legend=1 00:14:10.457 --rc geninfo_all_blocks=1 00:14:10.457 --rc geninfo_unexecuted_blocks=1 00:14:10.457 00:14:10.457 ' 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.457 12:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:10.457 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:10.457 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:10.458 
12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@456 -- # nvmf_veth_init 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:10.458 Cannot find device "nvmf_init_br" 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:10.458 Cannot find device "nvmf_init_br2" 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:10.458 Cannot find device "nvmf_tgt_br" 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:10.458 Cannot find device "nvmf_tgt_br2" 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:10.458 Cannot find device "nvmf_init_br" 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:10.458 Cannot find device "nvmf_init_br2" 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:10.458 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:10.717 Cannot find device "nvmf_tgt_br" 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:10.717 Cannot find device "nvmf_tgt_br2" 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:10.717 Cannot find device "nvmf_br" 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:10.717 Cannot find device "nvmf_init_if" 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:10.717 Cannot find device "nvmf_init_if2" 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:10.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:10.717 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:10.977 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:10.977 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:10.977 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:10.977 12:34:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:10.977 12:34:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:10.977 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:10.977 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:10.977 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:10.977 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:10.977 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:14:10.977 00:14:10.977 --- 10.0.0.3 ping statistics --- 00:14:10.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.977 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:10.977 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:10.977 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:10.977 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:14:10.977 00:14:10.977 --- 10.0.0.4 ping statistics --- 00:14:10.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.977 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:10.977 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:10.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:10.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:10.977 00:14:10.977 --- 10.0.0.1 ping statistics --- 00:14:10.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.977 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:10.977 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:10.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:10.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:14:10.977 00:14:10.977 --- 10.0.0.2 ping statistics --- 00:14:10.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.977 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:10.977 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # return 0 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84082 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84082 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84082 ']' 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.978 12:34:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.978 [2024-11-19 12:34:16.110859] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
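For reference, the nvmf/common.sh trace above first tears down any leftover interfaces (the "Cannot find device" / "Cannot open network namespace" messages are expected on a clean host) and then rebuilds the test topology: two veth pairs facing the initiator, two moved into the nvmf_tgt_ns_spdk namespace for the target, all tied together by the nvmf_br bridge, with iptables rules admitting NVMe/TCP traffic on port 4420. A condensed sketch of that setup, using only the interface names, addresses, and commands that appear in the trace:

```bash
# Sketch of the topology nvmf/common.sh builds (names/addresses taken from the trace above).
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces live inside the namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1/.2, target 10.0.0.3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the *_br ends together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Admit NVMe/TCP (port 4420) and allow forwarding across the bridge, then verify reachability
# in both directions, mirroring the four ping checks in the trace.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
```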
00:14:10.978 [2024-11-19 12:34:16.110970] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.237 [2024-11-19 12:34:16.253731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.237 [2024-11-19 12:34:16.294438] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.237 [2024-11-19 12:34:16.294500] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.237 [2024-11-19 12:34:16.294515] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.237 [2024-11-19 12:34:16.294525] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.237 [2024-11-19 12:34:16.294535] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.238 [2024-11-19 12:34:16.294566] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.175 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:12.175 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:12.175 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:12.175 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:12.175 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.175 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.175 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:12.175 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:12.434 true 00:14:12.434 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:12.434 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:12.693 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:12.693 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:12.693 12:34:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:12.952 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:12.952 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:13.211 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:13.211 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:13.211 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:13.470 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:13.470 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:13.729 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:13.729 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:13.729 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:13.729 12:34:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:13.989 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:13.989 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:13.989 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:14.248 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:14.248 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:14.506 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:14.506 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:14.506 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:14.765 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:14.765 12:34:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:15.024 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:15.024 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:15.024 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:15.024 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:15.024 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:15.025 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:14:15.025 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:14:15.025 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:14:15.025 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:15.025 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.sXoIbfb440 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.chEALPnrGN 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.sXoIbfb440 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.chEALPnrGN 00:14:15.284 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:15.543 12:34:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:15.803 [2024-11-19 12:34:20.964839] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:15.803 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.sXoIbfb440 00:14:15.803 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.sXoIbfb440 00:14:15.803 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:16.062 [2024-11-19 12:34:21.272174] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.062 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:16.321 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:16.580 [2024-11-19 12:34:21.760426] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:16.580 [2024-11-19 12:34:21.760643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:16.580 12:34:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:16.839 malloc0 00:14:17.099 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:17.359 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.sXoIbfb440 00:14:17.618 12:34:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:17.888 12:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.sXoIbfb440 00:14:27.887 Initializing NVMe Controllers 00:14:27.887 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:27.887 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:27.887 Initialization complete. Launching workers. 00:14:27.887 ======================================================== 00:14:27.887 Latency(us) 00:14:27.887 Device Information : IOPS MiB/s Average min max 00:14:27.887 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10173.89 39.74 6291.95 963.16 8878.18 00:14:27.887 ======================================================== 00:14:27.887 Total : 10173.89 39.74 6291.95 963.16 8878.18 00:14:27.887 00:14:27.887 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sXoIbfb440 00:14:27.888 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:27.888 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:27.888 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:27.888 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.sXoIbfb440 00:14:27.888 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:27.888 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84326 00:14:27.888 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:27.888 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:27.888 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84326 /var/tmp/bdevperf.sock 00:14:27.888 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84326 ']' 00:14:27.888 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:27.888 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:27.888 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:27.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
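The portion of the trace between nvmfappstart and the spdk_nvme_perf run selects the ssl socket implementation, derives two PSKs in the NVMe TLS interchange format, and publishes a TLS-enabled listener for cnode1. A condensed sketch of that rpc.py sequence follows; the /tmp/tmp.sXoIbfb440 path and the literal key string are the run-specific values from this log (mktemp output and format_interchange_psk of 00112233445566778899aabbccddeeff), not fixed names:

```bash
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Select the ssl socket implementation and write the first PSK file before framework init.
# The interchange format is NVMeTLSkey-1:01:<base64 of key material plus CRC>: as produced
# by format_interchange_psk in the trace.
$RPC sock_set_default_impl -i ssl
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > /tmp/tmp.sXoIbfb440
chmod 0600 /tmp/tmp.sXoIbfb440

# Pin TLS 1.3, then let the framework finish starting.
$RPC sock_impl_set_options -i ssl --tls-version 13
$RPC framework_start_init

# TCP transport, subsystem, TLS listener (-k), namespace, and the allowed host with its PSK.
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.sXoIbfb440
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

With the target configured this way, the spdk_nvme_perf invocation shown above connects over TLS to 10.0.0.3:4420 using the same PSK file via --psk-path and produces the IOPS/latency summary that follows.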
00:14:27.888 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:27.888 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:28.147 [2024-11-19 12:34:33.181366] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:28.147 [2024-11-19 12:34:33.181836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84326 ] 00:14:28.147 [2024-11-19 12:34:33.339098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.147 [2024-11-19 12:34:33.390058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.406 [2024-11-19 12:34:33.427672] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:28.406 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:28.406 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:28.406 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sXoIbfb440 00:14:28.666 12:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:28.925 [2024-11-19 12:34:34.049851] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:28.925 TLSTESTn1 00:14:28.925 12:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:29.184 Running I/O for 10 seconds... 
00:14:31.058 3712.00 IOPS, 14.50 MiB/s [2024-11-19T12:34:37.695Z] 3721.50 IOPS, 14.54 MiB/s [2024-11-19T12:34:38.655Z] 3758.00 IOPS, 14.68 MiB/s [2024-11-19T12:34:39.632Z] 3857.75 IOPS, 15.07 MiB/s [2024-11-19T12:34:40.570Z] 3919.60 IOPS, 15.31 MiB/s [2024-11-19T12:34:41.508Z] 3971.83 IOPS, 15.51 MiB/s [2024-11-19T12:34:42.444Z] 4014.00 IOPS, 15.68 MiB/s [2024-11-19T12:34:43.382Z] 4041.62 IOPS, 15.79 MiB/s [2024-11-19T12:34:44.320Z] 4037.89 IOPS, 15.77 MiB/s [2024-11-19T12:34:44.320Z] 4058.50 IOPS, 15.85 MiB/s 00:14:39.060 Latency(us) 00:14:39.060 [2024-11-19T12:34:44.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.060 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:39.060 Verification LBA range: start 0x0 length 0x2000 00:14:39.060 TLSTESTn1 : 10.01 4065.24 15.88 0.00 0.00 31433.39 5064.15 44564.48 00:14:39.060 [2024-11-19T12:34:44.320Z] =================================================================================================================== 00:14:39.060 [2024-11-19T12:34:44.320Z] Total : 4065.24 15.88 0.00 0.00 31433.39 5064.15 44564.48 00:14:39.060 { 00:14:39.060 "results": [ 00:14:39.060 { 00:14:39.060 "job": "TLSTESTn1", 00:14:39.060 "core_mask": "0x4", 00:14:39.060 "workload": "verify", 00:14:39.060 "status": "finished", 00:14:39.060 "verify_range": { 00:14:39.060 "start": 0, 00:14:39.060 "length": 8192 00:14:39.060 }, 00:14:39.060 "queue_depth": 128, 00:14:39.060 "io_size": 4096, 00:14:39.060 "runtime": 10.014418, 00:14:39.060 "iops": 4065.238738786418, 00:14:39.060 "mibps": 15.879838823384445, 00:14:39.060 "io_failed": 0, 00:14:39.060 "io_timeout": 0, 00:14:39.060 "avg_latency_us": 31433.38711637015, 00:14:39.060 "min_latency_us": 5064.145454545454, 00:14:39.060 "max_latency_us": 44564.48 00:14:39.060 } 00:14:39.060 ], 00:14:39.060 "core_count": 1 00:14:39.060 } 00:14:39.060 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:39.060 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 84326 00:14:39.060 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84326 ']' 00:14:39.060 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84326 00:14:39.060 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84326 00:14:39.320 killing process with pid 84326 00:14:39.320 Received shutdown signal, test time was about 10.000000 seconds 00:14:39.320 00:14:39.320 Latency(us) 00:14:39.320 [2024-11-19T12:34:44.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.320 [2024-11-19T12:34:44.580Z] =================================================================================================================== 00:14:39.320 [2024-11-19T12:34:44.580Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process 
with pid 84326' 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84326 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84326 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.chEALPnrGN 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.chEALPnrGN 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.chEALPnrGN 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.chEALPnrGN 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84453 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84453 /var/tmp/bdevperf.sock 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84453 ']' 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:39.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.320 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.320 [2024-11-19 12:34:44.532318] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
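The bdevperf run above (pid 84326) drives the initiator side over its own RPC socket: it registers the same PSK file under the keyring name key0 and attaches a TLS-protected controller with --psk, after which perform_tests produces the TLSTESTn1 I/O numbers shown. A condensed sketch of that sequence, assuming bdevperf is launched with the same flags as in the trace:

```bash
BPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock

# Start bdevperf on its own RPC socket (flags copied from target/tls.sh@27 in the trace).
$BPERF -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &

# Register the PSK file on the initiator side and attach the TLS-protected controller.
$RPC -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.sXoIbfb440
$RPC -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Drive I/O against the attached TLSTESTn1 bdev.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$SOCK" perform_tests
```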
00:14:39.321 [2024-11-19 12:34:44.532662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84453 ] 00:14:39.579 [2024-11-19 12:34:44.663709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.579 [2024-11-19 12:34:44.701431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.579 [2024-11-19 12:34:44.732852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:39.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:39.579 12:34:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.chEALPnrGN 00:14:40.145 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:40.405 [2024-11-19 12:34:45.501570] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:40.405 [2024-11-19 12:34:45.509980] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:40.405 [2024-11-19 12:34:45.510868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa20550 (107): Transport endpoint is not connected 00:14:40.405 [2024-11-19 12:34:45.511857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa20550 (9): Bad file descriptor 00:14:40.405 [2024-11-19 12:34:45.512851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:40.405 [2024-11-19 12:34:45.512881] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:40.405 [2024-11-19 12:34:45.512894] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:40.405 [2024-11-19 12:34:45.512909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:40.405 request: 00:14:40.405 { 00:14:40.405 "name": "TLSTEST", 00:14:40.405 "trtype": "tcp", 00:14:40.405 "traddr": "10.0.0.3", 00:14:40.405 "adrfam": "ipv4", 00:14:40.405 "trsvcid": "4420", 00:14:40.405 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.405 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:40.405 "prchk_reftag": false, 00:14:40.405 "prchk_guard": false, 00:14:40.405 "hdgst": false, 00:14:40.405 "ddgst": false, 00:14:40.405 "psk": "key0", 00:14:40.405 "allow_unrecognized_csi": false, 00:14:40.405 "method": "bdev_nvme_attach_controller", 00:14:40.405 "req_id": 1 00:14:40.405 } 00:14:40.405 Got JSON-RPC error response 00:14:40.405 response: 00:14:40.405 { 00:14:40.405 "code": -5, 00:14:40.405 "message": "Input/output error" 00:14:40.405 } 00:14:40.405 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84453 00:14:40.405 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84453 ']' 00:14:40.405 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84453 00:14:40.405 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:40.405 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.405 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84453 00:14:40.405 killing process with pid 84453 00:14:40.405 Received shutdown signal, test time was about 10.000000 seconds 00:14:40.405 00:14:40.405 Latency(us) 00:14:40.405 [2024-11-19T12:34:45.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.405 [2024-11-19T12:34:45.665Z] =================================================================================================================== 00:14:40.405 [2024-11-19T12:34:45.665Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:40.405 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:40.405 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:40.405 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84453' 00:14:40.405 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84453 00:14:40.405 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84453 00:14:40.664 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sXoIbfb440 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sXoIbfb440 
00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sXoIbfb440 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.sXoIbfb440 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84474 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84474 /var/tmp/bdevperf.sock 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84474 ']' 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:40.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:40.665 12:34:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.665 [2024-11-19 12:34:45.754952] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:40.665 [2024-11-19 12:34:45.755263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84474 ] 00:14:40.665 [2024-11-19 12:34:45.890278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.924 [2024-11-19 12:34:45.926160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.924 [2024-11-19 12:34:45.953997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:40.924 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:40.924 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:40.924 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sXoIbfb440 00:14:41.183 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:41.442 [2024-11-19 12:34:46.673378] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:41.442 [2024-11-19 12:34:46.679794] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:41.442 [2024-11-19 12:34:46.680038] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:41.442 [2024-11-19 12:34:46.680244] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:41.442 [2024-11-19 12:34:46.680379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166f550 (107): Transport endpoint is not connected 00:14:41.442 [2024-11-19 12:34:46.681371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166f550 (9): Bad file descriptor 00:14:41.442 [2024-11-19 12:34:46.682368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:41.442 [2024-11-19 12:34:46.682519] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:41.442 [2024-11-19 12:34:46.682549] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:41.442 [2024-11-19 12:34:46.682566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:41.442 request: 00:14:41.442 { 00:14:41.442 "name": "TLSTEST", 00:14:41.442 "trtype": "tcp", 00:14:41.442 "traddr": "10.0.0.3", 00:14:41.442 "adrfam": "ipv4", 00:14:41.442 "trsvcid": "4420", 00:14:41.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.442 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:41.442 "prchk_reftag": false, 00:14:41.442 "prchk_guard": false, 00:14:41.442 "hdgst": false, 00:14:41.442 "ddgst": false, 00:14:41.442 "psk": "key0", 00:14:41.442 "allow_unrecognized_csi": false, 00:14:41.442 "method": "bdev_nvme_attach_controller", 00:14:41.442 "req_id": 1 00:14:41.442 } 00:14:41.442 Got JSON-RPC error response 00:14:41.442 response: 00:14:41.442 { 00:14:41.442 "code": -5, 00:14:41.442 "message": "Input/output error" 00:14:41.442 } 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84474 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84474 ']' 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84474 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84474 00:14:41.702 killing process with pid 84474 00:14:41.702 Received shutdown signal, test time was about 10.000000 seconds 00:14:41.702 00:14:41.702 Latency(us) 00:14:41.702 [2024-11-19T12:34:46.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.702 [2024-11-19T12:34:46.962Z] =================================================================================================================== 00:14:41.702 [2024-11-19T12:34:46.962Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84474' 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84474 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84474 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sXoIbfb440 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sXoIbfb440 
00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:41.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sXoIbfb440 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.sXoIbfb440 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84495 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84495 /var/tmp/bdevperf.sock 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84495 ']' 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.702 12:34:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.702 [2024-11-19 12:34:46.941574] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
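The remaining cases repeat the same attach under deliberately broken configurations and wrap it in the NOT helper from autotest_common.sh, so each case passes only when the attach fails: pid 84453 presented the second key (/tmp/tmp.chEALPnrGN) to cnode1, pid 84474 presented the correct key as the unauthorized host2, and pid 84495 below targets cnode2; each ends in the "Could not find PSK for identity" / "Input/output error" exchange shown. A minimal, illustrative condensation of the pattern (in tls.sh the NOT wraps the whole run_bdevperf helper rather than the raw RPC call):

```bash
# Illustrative only: a PSK or identity mismatch must make the TLS attach fail,
# and NOT (from autotest_common.sh) succeeds only when its command fails.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bdevperf.sock

$RPC -s "$BPERF_SOCK" keyring_file_add_key key0 /tmp/tmp.chEALPnrGN   # key not configured for cnode1/host1
NOT $RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
```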
00:14:41.702 [2024-11-19 12:34:46.941861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84495 ] 00:14:41.961 [2024-11-19 12:34:47.076973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.961 [2024-11-19 12:34:47.112194] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.961 [2024-11-19 12:34:47.141205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:41.961 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.961 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:41.961 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sXoIbfb440 00:14:42.529 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:42.529 [2024-11-19 12:34:47.760828] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:42.529 [2024-11-19 12:34:47.769877] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:42.529 [2024-11-19 12:34:47.770112] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:42.529 [2024-11-19 12:34:47.770299] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:42.529 [2024-11-19 12:34:47.770475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6c550 (107): Transport endpoint is not connected 00:14:42.529 [2024-11-19 12:34:47.771479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6c550 (9): Bad file descriptor 00:14:42.529 [2024-11-19 12:34:47.772462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:42.529 [2024-11-19 12:34:47.772480] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:42.529 [2024-11-19 12:34:47.772505] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:42.529 [2024-11-19 12:34:47.772518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:42.529 request: 00:14:42.529 { 00:14:42.529 "name": "TLSTEST", 00:14:42.529 "trtype": "tcp", 00:14:42.529 "traddr": "10.0.0.3", 00:14:42.529 "adrfam": "ipv4", 00:14:42.529 "trsvcid": "4420", 00:14:42.529 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:42.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:42.529 "prchk_reftag": false, 00:14:42.529 "prchk_guard": false, 00:14:42.529 "hdgst": false, 00:14:42.529 "ddgst": false, 00:14:42.529 "psk": "key0", 00:14:42.529 "allow_unrecognized_csi": false, 00:14:42.529 "method": "bdev_nvme_attach_controller", 00:14:42.529 "req_id": 1 00:14:42.529 } 00:14:42.529 Got JSON-RPC error response 00:14:42.529 response: 00:14:42.529 { 00:14:42.529 "code": -5, 00:14:42.529 "message": "Input/output error" 00:14:42.529 } 00:14:42.788 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84495 00:14:42.788 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84495 ']' 00:14:42.788 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84495 00:14:42.788 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:42.788 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.788 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84495 00:14:42.788 killing process with pid 84495 00:14:42.788 Received shutdown signal, test time was about 10.000000 seconds 00:14:42.788 00:14:42.788 Latency(us) 00:14:42.788 [2024-11-19T12:34:48.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.788 [2024-11-19T12:34:48.048Z] =================================================================================================================== 00:14:42.788 [2024-11-19T12:34:48.048Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:42.788 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:42.788 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:42.788 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84495' 00:14:42.788 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84495 00:14:42.788 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84495 00:14:42.788 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:42.789 12:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84516 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84516 /var/tmp/bdevperf.sock 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84516 ']' 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:42.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.789 12:34:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.789 [2024-11-19 12:34:48.005284] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
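The final case below (pid 84516) registers key0 with an empty path: keyring_file_add_key rejects non-absolute paths ("Operation not permitted"), so no PSK is ever loaded and the subsequent attach fails with "Required key not available". Sketched in the same illustrative style as the previous condensation:

```bash
# Illustrative only: an empty (non-absolute) path is rejected by the file-based keyring,
# so the TLS attach that references key0 cannot succeed.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NOT $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
NOT $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
```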
00:14:42.789 [2024-11-19 12:34:48.005525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84516 ] 00:14:43.049 [2024-11-19 12:34:48.140002] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.049 [2024-11-19 12:34:48.178125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.049 [2024-11-19 12:34:48.208841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:43.049 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:43.049 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:43.049 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:43.616 [2024-11-19 12:34:48.569333] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:43.617 [2024-11-19 12:34:48.569379] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:43.617 request: 00:14:43.617 { 00:14:43.617 "name": "key0", 00:14:43.617 "path": "", 00:14:43.617 "method": "keyring_file_add_key", 00:14:43.617 "req_id": 1 00:14:43.617 } 00:14:43.617 Got JSON-RPC error response 00:14:43.617 response: 00:14:43.617 { 00:14:43.617 "code": -1, 00:14:43.617 "message": "Operation not permitted" 00:14:43.617 } 00:14:43.617 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:43.617 [2024-11-19 12:34:48.861475] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:43.617 [2024-11-19 12:34:48.861810] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:43.617 request: 00:14:43.617 { 00:14:43.617 "name": "TLSTEST", 00:14:43.617 "trtype": "tcp", 00:14:43.617 "traddr": "10.0.0.3", 00:14:43.617 "adrfam": "ipv4", 00:14:43.617 "trsvcid": "4420", 00:14:43.617 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.617 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:43.617 "prchk_reftag": false, 00:14:43.617 "prchk_guard": false, 00:14:43.617 "hdgst": false, 00:14:43.617 "ddgst": false, 00:14:43.617 "psk": "key0", 00:14:43.617 "allow_unrecognized_csi": false, 00:14:43.617 "method": "bdev_nvme_attach_controller", 00:14:43.617 "req_id": 1 00:14:43.617 } 00:14:43.617 Got JSON-RPC error response 00:14:43.617 response: 00:14:43.617 { 00:14:43.617 "code": -126, 00:14:43.617 "message": "Required key not available" 00:14:43.617 } 00:14:43.877 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84516 00:14:43.877 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84516 ']' 00:14:43.877 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84516 00:14:43.877 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:43.877 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:43.877 12:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84516 00:14:43.877 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:43.877 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:43.877 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84516' 00:14:43.877 killing process with pid 84516 00:14:43.877 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84516 00:14:43.877 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.877 00:14:43.877 Latency(us) 00:14:43.877 [2024-11-19T12:34:49.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.877 [2024-11-19T12:34:49.137Z] =================================================================================================================== 00:14:43.877 [2024-11-19T12:34:49.137Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:43.877 12:34:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84516 00:14:43.877 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:43.877 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:43.877 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:43.877 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:43.877 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:43.877 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 84082 00:14:43.877 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84082 ']' 00:14:43.877 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84082 00:14:43.877 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:43.877 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:43.877 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84082 00:14:43.877 killing process with pid 84082 00:14:43.877 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:43.877 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:43.877 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84082' 00:14:43.877 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84082 00:14:43.877 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84082 00:14:44.145 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:44.145 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:44.145 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:44.145 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 
-- # prefix=NVMeTLSkey-1 00:14:44.145 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:44.145 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ZwyV5UGRwu 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ZwyV5UGRwu 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84551 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84551 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84551 ']' 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.146 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.146 [2024-11-19 12:34:49.360912] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:44.146 [2024-11-19 12:34:49.361468] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.409 [2024-11-19 12:34:49.503444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.410 [2024-11-19 12:34:49.538744] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.410 [2024-11-19 12:34:49.539031] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:44.410 [2024-11-19 12:34:49.539220] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.410 [2024-11-19 12:34:49.539343] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.410 [2024-11-19 12:34:49.539385] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.410 [2024-11-19 12:34:49.539520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.410 [2024-11-19 12:34:49.569124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.410 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.410 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:44.410 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:44.410 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:44.410 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.410 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.410 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ZwyV5UGRwu 00:14:44.410 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZwyV5UGRwu 00:14:44.410 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:44.979 [2024-11-19 12:34:49.944313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.979 12:34:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:44.979 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:45.548 [2024-11-19 12:34:50.496468] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:45.548 [2024-11-19 12:34:50.496744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:45.548 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:45.548 malloc0 00:14:45.548 12:34:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:45.807 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZwyV5UGRwu 00:14:46.065 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:46.324 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZwyV5UGRwu 00:14:46.324 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:14:46.324 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:46.324 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:46.324 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZwyV5UGRwu 00:14:46.324 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:46.324 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:46.324 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84605 00:14:46.324 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:46.324 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84605 /var/tmp/bdevperf.sock 00:14:46.324 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84605 ']' 00:14:46.324 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:46.324 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:46.324 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:46.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:46.324 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:46.324 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.324 [2024-11-19 12:34:51.570303] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:46.324 [2024-11-19 12:34:51.570408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84605 ] 00:14:46.583 [2024-11-19 12:34:51.709347] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.583 [2024-11-19 12:34:51.751596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.583 [2024-11-19 12:34:51.785766] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.583 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:46.583 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:46.583 12:34:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZwyV5UGRwu 00:14:46.842 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:47.101 [2024-11-19 12:34:52.299968] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:47.360 TLSTESTn1 00:14:47.360 12:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:47.360 Running I/O for 10 seconds... 00:14:49.255 4044.00 IOPS, 15.80 MiB/s [2024-11-19T12:34:55.894Z] 4032.00 IOPS, 15.75 MiB/s [2024-11-19T12:34:56.830Z] 4096.00 IOPS, 16.00 MiB/s [2024-11-19T12:34:57.768Z] 4096.50 IOPS, 16.00 MiB/s [2024-11-19T12:34:58.705Z] 4120.40 IOPS, 16.10 MiB/s [2024-11-19T12:34:59.642Z] 4096.00 IOPS, 16.00 MiB/s [2024-11-19T12:35:00.579Z] 4112.57 IOPS, 16.06 MiB/s [2024-11-19T12:35:01.515Z] 4112.12 IOPS, 16.06 MiB/s [2024-11-19T12:35:02.893Z] 4081.78 IOPS, 15.94 MiB/s [2024-11-19T12:35:02.893Z] 4044.80 IOPS, 15.80 MiB/s 00:14:57.633 Latency(us) 00:14:57.633 [2024-11-19T12:35:02.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.633 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:57.633 Verification LBA range: start 0x0 length 0x2000 00:14:57.633 TLSTESTn1 : 10.03 4046.89 15.81 0.00 0.00 31566.18 8579.26 23592.96 00:14:57.634 [2024-11-19T12:35:02.894Z] =================================================================================================================== 00:14:57.634 [2024-11-19T12:35:02.894Z] Total : 4046.89 15.81 0.00 0.00 31566.18 8579.26 23592.96 00:14:57.634 { 00:14:57.634 "results": [ 00:14:57.634 { 00:14:57.634 "job": "TLSTESTn1", 00:14:57.634 "core_mask": "0x4", 00:14:57.634 "workload": "verify", 00:14:57.634 "status": "finished", 00:14:57.634 "verify_range": { 00:14:57.634 "start": 0, 00:14:57.634 "length": 8192 00:14:57.634 }, 00:14:57.634 "queue_depth": 128, 00:14:57.634 "io_size": 4096, 00:14:57.634 "runtime": 10.026462, 00:14:57.634 "iops": 4046.891116726917, 00:14:57.634 "mibps": 15.80816842471452, 00:14:57.634 "io_failed": 0, 00:14:57.634 "io_timeout": 0, 00:14:57.634 "avg_latency_us": 31566.175073128765, 00:14:57.634 "min_latency_us": 8579.258181818182, 00:14:57.634 
"max_latency_us": 23592.96 00:14:57.634 } 00:14:57.634 ], 00:14:57.634 "core_count": 1 00:14:57.634 } 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 84605 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84605 ']' 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84605 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84605 00:14:57.634 killing process with pid 84605 00:14:57.634 Received shutdown signal, test time was about 10.000000 seconds 00:14:57.634 00:14:57.634 Latency(us) 00:14:57.634 [2024-11-19T12:35:02.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.634 [2024-11-19T12:35:02.894Z] =================================================================================================================== 00:14:57.634 [2024-11-19T12:35:02.894Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84605' 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84605 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84605 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ZwyV5UGRwu 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZwyV5UGRwu 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZwyV5UGRwu 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZwyV5UGRwu 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZwyV5UGRwu 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84729 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84729 /var/tmp/bdevperf.sock 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84729 ']' 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:57.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:57.634 12:35:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.634 [2024-11-19 12:35:02.802175] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:57.634 [2024-11-19 12:35:02.802295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84729 ] 00:14:57.894 [2024-11-19 12:35:02.945451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.894 [2024-11-19 12:35:02.982185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.894 [2024-11-19 12:35:03.012039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:57.894 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:57.894 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:57.894 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZwyV5UGRwu 00:14:58.153 [2024-11-19 12:35:03.380294] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ZwyV5UGRwu': 0100666 00:14:58.153 [2024-11-19 12:35:03.380346] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:58.153 request: 00:14:58.153 { 00:14:58.153 "name": "key0", 00:14:58.153 "path": "/tmp/tmp.ZwyV5UGRwu", 00:14:58.153 "method": "keyring_file_add_key", 00:14:58.153 "req_id": 1 00:14:58.153 } 00:14:58.153 Got JSON-RPC error response 00:14:58.153 response: 00:14:58.153 { 00:14:58.153 "code": -1, 00:14:58.153 "message": "Operation not permitted" 00:14:58.153 } 00:14:58.153 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:58.723 [2024-11-19 12:35:03.708466] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:58.723 [2024-11-19 12:35:03.708560] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:58.723 request: 00:14:58.723 { 00:14:58.723 "name": "TLSTEST", 00:14:58.723 "trtype": "tcp", 00:14:58.723 "traddr": "10.0.0.3", 00:14:58.723 "adrfam": "ipv4", 00:14:58.723 "trsvcid": "4420", 00:14:58.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:58.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:58.723 "prchk_reftag": false, 00:14:58.723 "prchk_guard": false, 00:14:58.723 "hdgst": false, 00:14:58.723 "ddgst": false, 00:14:58.723 "psk": "key0", 00:14:58.723 "allow_unrecognized_csi": false, 00:14:58.723 "method": "bdev_nvme_attach_controller", 00:14:58.723 "req_id": 1 00:14:58.723 } 00:14:58.723 Got JSON-RPC error response 00:14:58.723 response: 00:14:58.723 { 00:14:58.723 "code": -126, 00:14:58.723 "message": "Required key not available" 00:14:58.723 } 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84729 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84729 ']' 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84729 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84729 00:14:58.723 killing process with pid 84729 00:14:58.723 Received shutdown signal, test time was about 10.000000 seconds 00:14:58.723 00:14:58.723 Latency(us) 00:14:58.723 [2024-11-19T12:35:03.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.723 [2024-11-19T12:35:03.983Z] =================================================================================================================== 00:14:58.723 [2024-11-19T12:35:03.983Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84729' 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84729 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84729 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 84551 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84551 ']' 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84551 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84551 00:14:58.723 killing process with pid 84551 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84551' 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84551 00:14:58.723 12:35:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84551 00:14:58.983 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:14:58.983 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:58.983 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:58.983 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:14:58.983 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84759 00:14:58.983 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84759 00:14:58.983 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:58.983 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84759 ']' 00:14:58.983 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.983 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:58.983 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.983 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:58.983 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:58.983 [2024-11-19 12:35:04.158017] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:58.983 [2024-11-19 12:35:04.158162] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.242 [2024-11-19 12:35:04.292332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.242 [2024-11-19 12:35:04.329361] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.242 [2024-11-19 12:35:04.329431] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.242 [2024-11-19 12:35:04.329442] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.243 [2024-11-19 12:35:04.329450] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.243 [2024-11-19 12:35:04.329456] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:59.243 [2024-11-19 12:35:04.329485] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.243 [2024-11-19 12:35:04.360363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:59.243 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:59.243 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:59.243 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:59.243 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:59.243 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:59.243 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.243 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ZwyV5UGRwu 00:14:59.243 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:59.243 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ZwyV5UGRwu 00:14:59.243 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:14:59.243 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.243 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:14:59.243 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.243 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.ZwyV5UGRwu 00:14:59.243 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZwyV5UGRwu 00:14:59.243 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:59.812 [2024-11-19 12:35:04.782385] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.812 12:35:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:00.070 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:00.330 [2024-11-19 12:35:05.438723] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:00.330 [2024-11-19 12:35:05.438974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:00.330 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:00.589 malloc0 00:15:00.589 12:35:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:01.159 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZwyV5UGRwu 00:15:01.159 
[2024-11-19 12:35:06.414367] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ZwyV5UGRwu': 0100666 00:15:01.159 [2024-11-19 12:35:06.414423] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:01.418 request: 00:15:01.418 { 00:15:01.418 "name": "key0", 00:15:01.418 "path": "/tmp/tmp.ZwyV5UGRwu", 00:15:01.418 "method": "keyring_file_add_key", 00:15:01.418 "req_id": 1 00:15:01.418 } 00:15:01.418 Got JSON-RPC error response 00:15:01.418 response: 00:15:01.418 { 00:15:01.418 "code": -1, 00:15:01.418 "message": "Operation not permitted" 00:15:01.418 } 00:15:01.418 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:01.678 [2024-11-19 12:35:06.718446] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:01.678 [2024-11-19 12:35:06.718550] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:01.678 request: 00:15:01.678 { 00:15:01.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.678 "host": "nqn.2016-06.io.spdk:host1", 00:15:01.678 "psk": "key0", 00:15:01.678 "method": "nvmf_subsystem_add_host", 00:15:01.678 "req_id": 1 00:15:01.678 } 00:15:01.678 Got JSON-RPC error response 00:15:01.678 response: 00:15:01.678 { 00:15:01.678 "code": -32603, 00:15:01.678 "message": "Internal error" 00:15:01.678 } 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 84759 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84759 ']' 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84759 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84759 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:01.678 killing process with pid 84759 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84759' 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84759 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84759 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ZwyV5UGRwu 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84822 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84822 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84822 ']' 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:01.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:01.678 12:35:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.937 [2024-11-19 12:35:06.974280] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:01.937 [2024-11-19 12:35:06.974369] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.937 [2024-11-19 12:35:07.111930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.937 [2024-11-19 12:35:07.152719] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.937 [2024-11-19 12:35:07.152808] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.937 [2024-11-19 12:35:07.152831] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.937 [2024-11-19 12:35:07.152841] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.937 [2024-11-19 12:35:07.152850] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:01.937 [2024-11-19 12:35:07.152883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.937 [2024-11-19 12:35:07.186981] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:02.931 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:02.931 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:02.931 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:02.931 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:02.931 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.931 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.931 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ZwyV5UGRwu 00:15:02.931 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZwyV5UGRwu 00:15:02.931 12:35:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:02.931 [2024-11-19 12:35:08.174203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.189 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:03.189 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:03.754 [2024-11-19 12:35:08.718357] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:03.754 [2024-11-19 12:35:08.718622] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:03.754 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:03.754 malloc0 00:15:03.754 12:35:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:04.013 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZwyV5UGRwu 00:15:04.303 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:04.562 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=84876 00:15:04.563 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:04.563 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:04.563 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 84876 /var/tmp/bdevperf.sock 00:15:04.563 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84876 ']' 
00:15:04.563 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:04.563 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:04.563 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:04.563 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.563 12:35:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.822 [2024-11-19 12:35:09.835959] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:04.822 [2024-11-19 12:35:09.836038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84876 ] 00:15:04.822 [2024-11-19 12:35:09.973510] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.822 [2024-11-19 12:35:10.014518] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.822 [2024-11-19 12:35:10.047228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:05.080 12:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:05.080 12:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:05.080 12:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZwyV5UGRwu 00:15:05.339 12:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:05.598 [2024-11-19 12:35:10.636533] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:05.598 TLSTESTn1 00:15:05.598 12:35:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:06.167 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:06.167 "subsystems": [ 00:15:06.167 { 00:15:06.167 "subsystem": "keyring", 00:15:06.167 "config": [ 00:15:06.167 { 00:15:06.167 "method": "keyring_file_add_key", 00:15:06.167 "params": { 00:15:06.167 "name": "key0", 00:15:06.167 "path": "/tmp/tmp.ZwyV5UGRwu" 00:15:06.167 } 00:15:06.167 } 00:15:06.167 ] 00:15:06.167 }, 00:15:06.167 { 00:15:06.167 "subsystem": "iobuf", 00:15:06.167 "config": [ 00:15:06.167 { 00:15:06.167 "method": "iobuf_set_options", 00:15:06.167 "params": { 00:15:06.167 "small_pool_count": 8192, 00:15:06.167 "large_pool_count": 1024, 00:15:06.167 "small_bufsize": 8192, 00:15:06.167 "large_bufsize": 135168 00:15:06.167 } 00:15:06.167 } 00:15:06.167 ] 00:15:06.167 }, 00:15:06.167 { 00:15:06.167 "subsystem": "sock", 00:15:06.167 "config": [ 00:15:06.167 { 00:15:06.167 "method": "sock_set_default_impl", 00:15:06.167 "params": { 00:15:06.167 "impl_name": "uring" 
00:15:06.167 } 00:15:06.167 }, 00:15:06.167 { 00:15:06.167 "method": "sock_impl_set_options", 00:15:06.167 "params": { 00:15:06.167 "impl_name": "ssl", 00:15:06.167 "recv_buf_size": 4096, 00:15:06.167 "send_buf_size": 4096, 00:15:06.167 "enable_recv_pipe": true, 00:15:06.167 "enable_quickack": false, 00:15:06.167 "enable_placement_id": 0, 00:15:06.167 "enable_zerocopy_send_server": true, 00:15:06.167 "enable_zerocopy_send_client": false, 00:15:06.167 "zerocopy_threshold": 0, 00:15:06.167 "tls_version": 0, 00:15:06.167 "enable_ktls": false 00:15:06.167 } 00:15:06.167 }, 00:15:06.167 { 00:15:06.167 "method": "sock_impl_set_options", 00:15:06.167 "params": { 00:15:06.167 "impl_name": "posix", 00:15:06.167 "recv_buf_size": 2097152, 00:15:06.167 "send_buf_size": 2097152, 00:15:06.167 "enable_recv_pipe": true, 00:15:06.167 "enable_quickack": false, 00:15:06.167 "enable_placement_id": 0, 00:15:06.167 "enable_zerocopy_send_server": true, 00:15:06.167 "enable_zerocopy_send_client": false, 00:15:06.167 "zerocopy_threshold": 0, 00:15:06.167 "tls_version": 0, 00:15:06.167 "enable_ktls": false 00:15:06.167 } 00:15:06.167 }, 00:15:06.167 { 00:15:06.167 "method": "sock_impl_set_options", 00:15:06.167 "params": { 00:15:06.167 "impl_name": "uring", 00:15:06.167 "recv_buf_size": 2097152, 00:15:06.167 "send_buf_size": 2097152, 00:15:06.167 "enable_recv_pipe": true, 00:15:06.167 "enable_quickack": false, 00:15:06.167 "enable_placement_id": 0, 00:15:06.167 "enable_zerocopy_send_server": false, 00:15:06.167 "enable_zerocopy_send_client": false, 00:15:06.167 "zerocopy_threshold": 0, 00:15:06.167 "tls_version": 0, 00:15:06.167 "enable_ktls": false 00:15:06.167 } 00:15:06.167 } 00:15:06.167 ] 00:15:06.167 }, 00:15:06.167 { 00:15:06.167 "subsystem": "vmd", 00:15:06.167 "config": [] 00:15:06.168 }, 00:15:06.168 { 00:15:06.168 "subsystem": "accel", 00:15:06.168 "config": [ 00:15:06.168 { 00:15:06.168 "method": "accel_set_options", 00:15:06.168 "params": { 00:15:06.168 "small_cache_size": 128, 00:15:06.168 "large_cache_size": 16, 00:15:06.168 "task_count": 2048, 00:15:06.168 "sequence_count": 2048, 00:15:06.168 "buf_count": 2048 00:15:06.168 } 00:15:06.168 } 00:15:06.168 ] 00:15:06.168 }, 00:15:06.168 { 00:15:06.168 "subsystem": "bdev", 00:15:06.168 "config": [ 00:15:06.168 { 00:15:06.168 "method": "bdev_set_options", 00:15:06.168 "params": { 00:15:06.168 "bdev_io_pool_size": 65535, 00:15:06.168 "bdev_io_cache_size": 256, 00:15:06.168 "bdev_auto_examine": true, 00:15:06.168 "iobuf_small_cache_size": 128, 00:15:06.168 "iobuf_large_cache_size": 16 00:15:06.168 } 00:15:06.168 }, 00:15:06.168 { 00:15:06.168 "method": "bdev_raid_set_options", 00:15:06.168 "params": { 00:15:06.168 "process_window_size_kb": 1024, 00:15:06.168 "process_max_bandwidth_mb_sec": 0 00:15:06.168 } 00:15:06.168 }, 00:15:06.168 { 00:15:06.168 "method": "bdev_iscsi_set_options", 00:15:06.168 "params": { 00:15:06.168 "timeout_sec": 30 00:15:06.168 } 00:15:06.168 }, 00:15:06.168 { 00:15:06.168 "method": "bdev_nvme_set_options", 00:15:06.168 "params": { 00:15:06.168 "action_on_timeout": "none", 00:15:06.168 "timeout_us": 0, 00:15:06.168 "timeout_admin_us": 0, 00:15:06.168 "keep_alive_timeout_ms": 10000, 00:15:06.168 "arbitration_burst": 0, 00:15:06.168 "low_priority_weight": 0, 00:15:06.168 "medium_priority_weight": 0, 00:15:06.168 "high_priority_weight": 0, 00:15:06.168 "nvme_adminq_poll_period_us": 10000, 00:15:06.168 "nvme_ioq_poll_period_us": 0, 00:15:06.168 "io_queue_requests": 0, 00:15:06.168 "delay_cmd_submit": true, 00:15:06.168 
"transport_retry_count": 4, 00:15:06.168 "bdev_retry_count": 3, 00:15:06.168 "transport_ack_timeout": 0, 00:15:06.168 "ctrlr_loss_timeout_sec": 0, 00:15:06.168 "reconnect_delay_sec": 0, 00:15:06.168 "fast_io_fail_timeout_sec": 0, 00:15:06.168 "disable_auto_failback": false, 00:15:06.168 "generate_uuids": false, 00:15:06.168 "transport_tos": 0, 00:15:06.168 "nvme_error_stat": false, 00:15:06.168 "rdma_srq_size": 0, 00:15:06.168 "io_path_stat": false, 00:15:06.168 "allow_accel_sequence": false, 00:15:06.168 "rdma_max_cq_size": 0, 00:15:06.168 "rdma_cm_event_timeout_ms": 0, 00:15:06.168 "dhchap_digests": [ 00:15:06.168 "sha256", 00:15:06.168 "sha384", 00:15:06.168 "sha512" 00:15:06.168 ], 00:15:06.168 "dhchap_dhgroups": [ 00:15:06.168 "null", 00:15:06.168 "ffdhe2048", 00:15:06.168 "ffdhe3072", 00:15:06.168 "ffdhe4096", 00:15:06.168 "ffdhe6144", 00:15:06.168 "ffdhe8192" 00:15:06.168 ] 00:15:06.168 } 00:15:06.168 }, 00:15:06.168 { 00:15:06.168 "method": "bdev_nvme_set_hotplug", 00:15:06.168 "params": { 00:15:06.168 "period_us": 100000, 00:15:06.168 "enable": false 00:15:06.168 } 00:15:06.168 }, 00:15:06.168 { 00:15:06.168 "method": "bdev_malloc_create", 00:15:06.168 "params": { 00:15:06.168 "name": "malloc0", 00:15:06.168 "num_blocks": 8192, 00:15:06.168 "block_size": 4096, 00:15:06.168 "physical_block_size": 4096, 00:15:06.168 "uuid": "15023363-5bba-4857-a563-cbcc1e659191", 00:15:06.168 "optimal_io_boundary": 0, 00:15:06.168 "md_size": 0, 00:15:06.168 "dif_type": 0, 00:15:06.168 "dif_is_head_of_md": false, 00:15:06.168 "dif_pi_format": 0 00:15:06.168 } 00:15:06.168 }, 00:15:06.168 { 00:15:06.168 "method": "bdev_wait_for_examine" 00:15:06.168 } 00:15:06.168 ] 00:15:06.168 }, 00:15:06.168 { 00:15:06.168 "subsystem": "nbd", 00:15:06.168 "config": [] 00:15:06.168 }, 00:15:06.168 { 00:15:06.168 "subsystem": "scheduler", 00:15:06.168 "config": [ 00:15:06.168 { 00:15:06.168 "method": "framework_set_scheduler", 00:15:06.168 "params": { 00:15:06.168 "name": "static" 00:15:06.168 } 00:15:06.168 } 00:15:06.168 ] 00:15:06.168 }, 00:15:06.168 { 00:15:06.168 "subsystem": "nvmf", 00:15:06.168 "config": [ 00:15:06.168 { 00:15:06.168 "method": "nvmf_set_config", 00:15:06.168 "params": { 00:15:06.168 "discovery_filter": "match_any", 00:15:06.168 "admin_cmd_passthru": { 00:15:06.168 "identify_ctrlr": false 00:15:06.168 }, 00:15:06.168 "dhchap_digests": [ 00:15:06.168 "sha256", 00:15:06.168 "sha384", 00:15:06.168 "sha512" 00:15:06.168 ], 00:15:06.168 "dhchap_dhgroups": [ 00:15:06.168 "null", 00:15:06.168 "ffdhe2048", 00:15:06.168 "ffdhe3072", 00:15:06.168 "ffdhe4096", 00:15:06.168 "ffdhe6144", 00:15:06.168 "ffdhe8192" 00:15:06.168 ] 00:15:06.168 } 00:15:06.168 }, 00:15:06.168 { 00:15:06.168 "method": "nvmf_set_max_subsystems", 00:15:06.168 "params": { 00:15:06.168 "max_subsystems": 1024 00:15:06.168 } 00:15:06.168 }, 00:15:06.168 { 00:15:06.168 "method": "nvmf_set_crdt", 00:15:06.168 "params": { 00:15:06.168 "crdt1": 0, 00:15:06.168 "crdt2": 0, 00:15:06.168 "crdt3": 0 00:15:06.168 } 00:15:06.168 }, 00:15:06.168 { 00:15:06.168 "method": "nvmf_create_transport", 00:15:06.168 "params": { 00:15:06.168 "trtype": "TCP", 00:15:06.168 "max_queue_depth": 128, 00:15:06.168 "max_io_qpairs_per_ctrlr": 127, 00:15:06.168 "in_capsule_data_size": 4096, 00:15:06.168 "max_io_size": 131072, 00:15:06.168 "io_unit_size": 131072, 00:15:06.168 "max_aq_depth": 128, 00:15:06.168 "num_shared_buffers": 511, 00:15:06.168 "buf_cache_size": 4294967295, 00:15:06.168 "dif_insert_or_strip": false, 00:15:06.168 "zcopy": false, 00:15:06.168 
"c2h_success": false, 00:15:06.168 "sock_priority": 0, 00:15:06.168 "abort_timeout_sec": 1, 00:15:06.168 "ack_timeout": 0, 00:15:06.168 "data_wr_pool_size": 0 00:15:06.168 } 00:15:06.168 }, 00:15:06.168 { 00:15:06.169 "method": "nvmf_create_subsystem", 00:15:06.169 "params": { 00:15:06.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.169 "allow_any_host": false, 00:15:06.169 "serial_number": "SPDK00000000000001", 00:15:06.169 "model_number": "SPDK bdev Controller", 00:15:06.169 "max_namespaces": 10, 00:15:06.169 "min_cntlid": 1, 00:15:06.169 "max_cntlid": 65519, 00:15:06.169 "ana_reporting": false 00:15:06.169 } 00:15:06.169 }, 00:15:06.169 { 00:15:06.169 "method": "nvmf_subsystem_add_host", 00:15:06.169 "params": { 00:15:06.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.169 "host": "nqn.2016-06.io.spdk:host1", 00:15:06.169 "psk": "key0" 00:15:06.169 } 00:15:06.169 }, 00:15:06.169 { 00:15:06.169 "method": "nvmf_subsystem_add_ns", 00:15:06.169 "params": { 00:15:06.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.169 "namespace": { 00:15:06.169 "nsid": 1, 00:15:06.169 "bdev_name": "malloc0", 00:15:06.169 "nguid": "150233635BBA4857A563CBCC1E659191", 00:15:06.169 "uuid": "15023363-5bba-4857-a563-cbcc1e659191", 00:15:06.169 "no_auto_visible": false 00:15:06.169 } 00:15:06.169 } 00:15:06.169 }, 00:15:06.169 { 00:15:06.169 "method": "nvmf_subsystem_add_listener", 00:15:06.169 "params": { 00:15:06.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.169 "listen_address": { 00:15:06.169 "trtype": "TCP", 00:15:06.169 "adrfam": "IPv4", 00:15:06.169 "traddr": "10.0.0.3", 00:15:06.169 "trsvcid": "4420" 00:15:06.169 }, 00:15:06.169 "secure_channel": true 00:15:06.169 } 00:15:06.169 } 00:15:06.169 ] 00:15:06.169 } 00:15:06.169 ] 00:15:06.169 }' 00:15:06.169 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:06.429 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:06.429 "subsystems": [ 00:15:06.429 { 00:15:06.429 "subsystem": "keyring", 00:15:06.429 "config": [ 00:15:06.429 { 00:15:06.429 "method": "keyring_file_add_key", 00:15:06.429 "params": { 00:15:06.429 "name": "key0", 00:15:06.429 "path": "/tmp/tmp.ZwyV5UGRwu" 00:15:06.429 } 00:15:06.429 } 00:15:06.429 ] 00:15:06.429 }, 00:15:06.429 { 00:15:06.429 "subsystem": "iobuf", 00:15:06.429 "config": [ 00:15:06.429 { 00:15:06.429 "method": "iobuf_set_options", 00:15:06.429 "params": { 00:15:06.429 "small_pool_count": 8192, 00:15:06.429 "large_pool_count": 1024, 00:15:06.429 "small_bufsize": 8192, 00:15:06.429 "large_bufsize": 135168 00:15:06.429 } 00:15:06.429 } 00:15:06.429 ] 00:15:06.429 }, 00:15:06.429 { 00:15:06.429 "subsystem": "sock", 00:15:06.429 "config": [ 00:15:06.429 { 00:15:06.429 "method": "sock_set_default_impl", 00:15:06.429 "params": { 00:15:06.429 "impl_name": "uring" 00:15:06.429 } 00:15:06.429 }, 00:15:06.429 { 00:15:06.429 "method": "sock_impl_set_options", 00:15:06.429 "params": { 00:15:06.429 "impl_name": "ssl", 00:15:06.429 "recv_buf_size": 4096, 00:15:06.429 "send_buf_size": 4096, 00:15:06.429 "enable_recv_pipe": true, 00:15:06.429 "enable_quickack": false, 00:15:06.429 "enable_placement_id": 0, 00:15:06.429 "enable_zerocopy_send_server": true, 00:15:06.429 "enable_zerocopy_send_client": false, 00:15:06.429 "zerocopy_threshold": 0, 00:15:06.429 "tls_version": 0, 00:15:06.429 "enable_ktls": false 00:15:06.429 } 00:15:06.429 }, 00:15:06.429 { 00:15:06.429 "method": 
"sock_impl_set_options", 00:15:06.429 "params": { 00:15:06.429 "impl_name": "posix", 00:15:06.429 "recv_buf_size": 2097152, 00:15:06.429 "send_buf_size": 2097152, 00:15:06.429 "enable_recv_pipe": true, 00:15:06.429 "enable_quickack": false, 00:15:06.429 "enable_placement_id": 0, 00:15:06.429 "enable_zerocopy_send_server": true, 00:15:06.429 "enable_zerocopy_send_client": false, 00:15:06.429 "zerocopy_threshold": 0, 00:15:06.429 "tls_version": 0, 00:15:06.429 "enable_ktls": false 00:15:06.429 } 00:15:06.429 }, 00:15:06.429 { 00:15:06.429 "method": "sock_impl_set_options", 00:15:06.429 "params": { 00:15:06.429 "impl_name": "uring", 00:15:06.429 "recv_buf_size": 2097152, 00:15:06.429 "send_buf_size": 2097152, 00:15:06.429 "enable_recv_pipe": true, 00:15:06.429 "enable_quickack": false, 00:15:06.429 "enable_placement_id": 0, 00:15:06.429 "enable_zerocopy_send_server": false, 00:15:06.429 "enable_zerocopy_send_client": false, 00:15:06.429 "zerocopy_threshold": 0, 00:15:06.430 "tls_version": 0, 00:15:06.430 "enable_ktls": false 00:15:06.430 } 00:15:06.430 } 00:15:06.430 ] 00:15:06.430 }, 00:15:06.430 { 00:15:06.430 "subsystem": "vmd", 00:15:06.430 "config": [] 00:15:06.430 }, 00:15:06.430 { 00:15:06.430 "subsystem": "accel", 00:15:06.430 "config": [ 00:15:06.430 { 00:15:06.430 "method": "accel_set_options", 00:15:06.430 "params": { 00:15:06.430 "small_cache_size": 128, 00:15:06.430 "large_cache_size": 16, 00:15:06.430 "task_count": 2048, 00:15:06.430 "sequence_count": 2048, 00:15:06.430 "buf_count": 2048 00:15:06.430 } 00:15:06.430 } 00:15:06.430 ] 00:15:06.430 }, 00:15:06.430 { 00:15:06.430 "subsystem": "bdev", 00:15:06.430 "config": [ 00:15:06.430 { 00:15:06.430 "method": "bdev_set_options", 00:15:06.430 "params": { 00:15:06.430 "bdev_io_pool_size": 65535, 00:15:06.430 "bdev_io_cache_size": 256, 00:15:06.430 "bdev_auto_examine": true, 00:15:06.430 "iobuf_small_cache_size": 128, 00:15:06.430 "iobuf_large_cache_size": 16 00:15:06.430 } 00:15:06.430 }, 00:15:06.430 { 00:15:06.430 "method": "bdev_raid_set_options", 00:15:06.430 "params": { 00:15:06.430 "process_window_size_kb": 1024, 00:15:06.430 "process_max_bandwidth_mb_sec": 0 00:15:06.430 } 00:15:06.430 }, 00:15:06.430 { 00:15:06.430 "method": "bdev_iscsi_set_options", 00:15:06.430 "params": { 00:15:06.430 "timeout_sec": 30 00:15:06.430 } 00:15:06.430 }, 00:15:06.430 { 00:15:06.430 "method": "bdev_nvme_set_options", 00:15:06.430 "params": { 00:15:06.430 "action_on_timeout": "none", 00:15:06.430 "timeout_us": 0, 00:15:06.430 "timeout_admin_us": 0, 00:15:06.430 "keep_alive_timeout_ms": 10000, 00:15:06.430 "arbitration_burst": 0, 00:15:06.430 "low_priority_weight": 0, 00:15:06.430 "medium_priority_weight": 0, 00:15:06.430 "high_priority_weight": 0, 00:15:06.430 "nvme_adminq_poll_period_us": 10000, 00:15:06.430 "nvme_ioq_poll_period_us": 0, 00:15:06.430 "io_queue_requests": 512, 00:15:06.430 "delay_cmd_submit": true, 00:15:06.430 "transport_retry_count": 4, 00:15:06.430 "bdev_retry_count": 3, 00:15:06.430 "transport_ack_timeout": 0, 00:15:06.430 "ctrlr_loss_timeout_sec": 0, 00:15:06.430 "reconnect_delay_sec": 0, 00:15:06.430 "fast_io_fail_timeout_sec": 0, 00:15:06.430 "disable_auto_failback": false, 00:15:06.430 "generate_uuids": false, 00:15:06.430 "transport_tos": 0, 00:15:06.430 "nvme_error_stat": false, 00:15:06.430 "rdma_srq_size": 0, 00:15:06.430 "io_path_stat": false, 00:15:06.430 "allow_accel_sequence": false, 00:15:06.430 "rdma_max_cq_size": 0, 00:15:06.430 "rdma_cm_event_timeout_ms": 0, 00:15:06.430 "dhchap_digests": [ 00:15:06.430 
"sha256", 00:15:06.430 "sha384", 00:15:06.430 "sha512" 00:15:06.430 ], 00:15:06.430 "dhchap_dhgroups": [ 00:15:06.430 "null", 00:15:06.430 "ffdhe2048", 00:15:06.430 "ffdhe3072", 00:15:06.430 "ffdhe4096", 00:15:06.430 "ffdhe6144", 00:15:06.430 "ffdhe8192" 00:15:06.430 ] 00:15:06.430 } 00:15:06.430 }, 00:15:06.430 { 00:15:06.430 "method": "bdev_nvme_attach_controller", 00:15:06.430 "params": { 00:15:06.430 "name": "TLSTEST", 00:15:06.430 "trtype": "TCP", 00:15:06.430 "adrfam": "IPv4", 00:15:06.430 "traddr": "10.0.0.3", 00:15:06.430 "trsvcid": "4420", 00:15:06.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.430 "prchk_reftag": false, 00:15:06.430 "prchk_guard": false, 00:15:06.430 "ctrlr_loss_timeout_sec": 0, 00:15:06.430 "reconnect_delay_sec": 0, 00:15:06.430 "fast_io_fail_timeout_sec": 0, 00:15:06.430 "psk": "key0", 00:15:06.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:06.430 "hdgst": false, 00:15:06.430 "ddgst": false 00:15:06.430 } 00:15:06.430 }, 00:15:06.430 { 00:15:06.430 "method": "bdev_nvme_set_hotplug", 00:15:06.430 "params": { 00:15:06.430 "period_us": 100000, 00:15:06.430 "enable": false 00:15:06.430 } 00:15:06.430 }, 00:15:06.430 { 00:15:06.430 "method": "bdev_wait_for_examine" 00:15:06.430 } 00:15:06.430 ] 00:15:06.430 }, 00:15:06.430 { 00:15:06.430 "subsystem": "nbd", 00:15:06.430 "config": [] 00:15:06.430 } 00:15:06.430 ] 00:15:06.430 }' 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 84876 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84876 ']' 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84876 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84876 00:15:06.430 killing process with pid 84876 00:15:06.430 Received shutdown signal, test time was about 10.000000 seconds 00:15:06.430 00:15:06.430 Latency(us) 00:15:06.430 [2024-11-19T12:35:11.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.430 [2024-11-19T12:35:11.690Z] =================================================================================================================== 00:15:06.430 [2024-11-19T12:35:11.690Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84876' 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84876 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84876 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 84822 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84822 ']' 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84822 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@955 -- # uname 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84822 00:15:06.430 killing process with pid 84822 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84822' 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84822 00:15:06.430 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84822 00:15:06.690 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:06.690 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:06.690 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:06.690 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:06.690 "subsystems": [ 00:15:06.690 { 00:15:06.690 "subsystem": "keyring", 00:15:06.690 "config": [ 00:15:06.690 { 00:15:06.691 "method": "keyring_file_add_key", 00:15:06.691 "params": { 00:15:06.691 "name": "key0", 00:15:06.691 "path": "/tmp/tmp.ZwyV5UGRwu" 00:15:06.691 } 00:15:06.691 } 00:15:06.691 ] 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "subsystem": "iobuf", 00:15:06.691 "config": [ 00:15:06.691 { 00:15:06.691 "method": "iobuf_set_options", 00:15:06.691 "params": { 00:15:06.691 "small_pool_count": 8192, 00:15:06.691 "large_pool_count": 1024, 00:15:06.691 "small_bufsize": 8192, 00:15:06.691 "large_bufsize": 135168 00:15:06.691 } 00:15:06.691 } 00:15:06.691 ] 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "subsystem": "sock", 00:15:06.691 "config": [ 00:15:06.691 { 00:15:06.691 "method": "sock_set_default_impl", 00:15:06.691 "params": { 00:15:06.691 "impl_name": "uring" 00:15:06.691 } 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "method": "sock_impl_set_options", 00:15:06.691 "params": { 00:15:06.691 "impl_name": "ssl", 00:15:06.691 "recv_buf_size": 4096, 00:15:06.691 "send_buf_size": 4096, 00:15:06.691 "enable_recv_pipe": true, 00:15:06.691 "enable_quickack": false, 00:15:06.691 "enable_placement_id": 0, 00:15:06.691 "enable_zerocopy_send_server": true, 00:15:06.691 "enable_zerocopy_send_client": false, 00:15:06.691 "zerocopy_threshold": 0, 00:15:06.691 "tls_version": 0, 00:15:06.691 "enable_ktls": false 00:15:06.691 } 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "method": "sock_impl_set_options", 00:15:06.691 "params": { 00:15:06.691 "impl_name": "posix", 00:15:06.691 "recv_buf_size": 2097152, 00:15:06.691 "send_buf_size": 2097152, 00:15:06.691 "enable_recv_pipe": true, 00:15:06.691 "enable_quickack": false, 00:15:06.691 "enable_placement_id": 0, 00:15:06.691 "enable_zerocopy_send_server": true, 00:15:06.691 "enable_zerocopy_send_client": false, 00:15:06.691 "zerocopy_threshold": 0, 00:15:06.691 "tls_version": 0, 00:15:06.691 "enable_ktls": false 00:15:06.691 } 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "method": "sock_impl_set_options", 00:15:06.691 "params": { 00:15:06.691 "impl_name": "uring", 00:15:06.691 "recv_buf_size": 2097152, 00:15:06.691 
"send_buf_size": 2097152, 00:15:06.691 "enable_recv_pipe": true, 00:15:06.691 "enable_quickack": false, 00:15:06.691 "enable_placement_id": 0, 00:15:06.691 "enable_zerocopy_send_server": false, 00:15:06.691 "enable_zerocopy_send_client": false, 00:15:06.691 "zerocopy_threshold": 0, 00:15:06.691 "tls_version": 0, 00:15:06.691 "enable_ktls": false 00:15:06.691 } 00:15:06.691 } 00:15:06.691 ] 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "subsystem": "vmd", 00:15:06.691 "config": [] 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "subsystem": "accel", 00:15:06.691 "config": [ 00:15:06.691 { 00:15:06.691 "method": "accel_set_options", 00:15:06.691 "params": { 00:15:06.691 "small_cache_size": 128, 00:15:06.691 "large_cache_size": 16, 00:15:06.691 "task_count": 2048, 00:15:06.691 "sequence_count": 2048, 00:15:06.691 "buf_count": 2048 00:15:06.691 } 00:15:06.691 } 00:15:06.691 ] 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "subsystem": "bdev", 00:15:06.691 "config": [ 00:15:06.691 { 00:15:06.691 "method": "bdev_set_options", 00:15:06.691 "params": { 00:15:06.691 "bdev_io_pool_size": 65535, 00:15:06.691 "bdev_io_cache_size": 256, 00:15:06.691 "bdev_auto_examine": true, 00:15:06.691 "iobuf_small_cache_size": 128, 00:15:06.691 "iobuf_large_cache_size": 16 00:15:06.691 } 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "method": "bdev_raid_set_options", 00:15:06.691 "params": { 00:15:06.691 "process_window_size_kb": 1024, 00:15:06.691 "process_max_bandwidth_mb_sec": 0 00:15:06.691 } 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "method": "bdev_iscsi_set_options", 00:15:06.691 "params": { 00:15:06.691 "timeout_sec": 30 00:15:06.691 } 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "method": "bdev_nvme_set_options", 00:15:06.691 "params": { 00:15:06.691 "action_on_timeout": "none", 00:15:06.691 "timeout_us": 0, 00:15:06.691 "timeout_admin_us": 0, 00:15:06.691 "keep_alive_timeout_ms": 10000, 00:15:06.691 "arbitration_burst": 0, 00:15:06.691 "low_priority_weight": 0, 00:15:06.691 "medium_priority_weight": 0, 00:15:06.691 "high_priority_weight": 0, 00:15:06.691 "nvme_adminq_poll_period_us": 10000, 00:15:06.691 "nvme_ioq_poll_period_us": 0, 00:15:06.691 "io_queue_requests": 0, 00:15:06.691 "delay_cmd_submit": true, 00:15:06.691 "transport_retry_count": 4, 00:15:06.691 "bdev_retry_count": 3, 00:15:06.691 "transport_ack_timeout": 0, 00:15:06.691 "ctrlr_loss_timeout_sec": 0, 00:15:06.691 "reconnect_delay_sec": 0, 00:15:06.691 "fast_io_fail_timeout_sec": 0, 00:15:06.691 "disable_auto_failback": false, 00:15:06.691 "generate_uuids": false, 00:15:06.691 "transport_tos": 0, 00:15:06.691 "nvme_error_stat": false, 00:15:06.691 "rdma_srq_size": 0, 00:15:06.691 "io_path_stat": false, 00:15:06.691 "allow_accel_sequence": false, 00:15:06.691 "rdma_max_cq_size": 0, 00:15:06.691 "rdma_cm_event_timeout_ms": 0, 00:15:06.691 "dhchap_digests": [ 00:15:06.691 "sha256", 00:15:06.691 "sha384", 00:15:06.691 "sha512" 00:15:06.691 ], 00:15:06.691 "dhchap_dhgroups": [ 00:15:06.691 "null", 00:15:06.691 "ffdhe2048", 00:15:06.691 "ffdhe3072", 00:15:06.691 "ffdhe4096", 00:15:06.691 "ffdhe6144", 00:15:06.691 "ffdhe8192" 00:15:06.691 ] 00:15:06.691 } 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "method": "bdev_nvme_set_hotplug", 00:15:06.691 "params": { 00:15:06.691 "period_us": 100000, 00:15:06.691 "enable": false 00:15:06.691 } 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "method": "bdev_malloc_create", 00:15:06.691 "params": { 00:15:06.691 "name": "malloc0", 00:15:06.691 "num_blocks": 8192, 00:15:06.691 "block_size": 4096, 00:15:06.691 
"physical_block_size": 4096, 00:15:06.691 "uuid": "15023363-5bba-4857-a563-cbcc1e659191", 00:15:06.691 "optimal_io_boundary": 0, 00:15:06.691 "md_size": 0, 00:15:06.691 "dif_type": 0, 00:15:06.691 "dif_is_head_of_md": false, 00:15:06.691 "dif_pi_format": 0 00:15:06.691 } 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "method": "bdev_wait_for_examine" 00:15:06.691 } 00:15:06.691 ] 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "subsystem": "nbd", 00:15:06.691 "config": [] 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "subsystem": "scheduler", 00:15:06.691 "config": [ 00:15:06.691 { 00:15:06.691 "method": "framework_set_scheduler", 00:15:06.691 "params": { 00:15:06.691 "name": "static" 00:15:06.691 } 00:15:06.691 } 00:15:06.691 ] 00:15:06.691 }, 00:15:06.691 { 00:15:06.691 "subsystem": "nvmf", 00:15:06.691 "config": [ 00:15:06.691 { 00:15:06.691 "method": "nvmf_set_config", 00:15:06.691 "params": { 00:15:06.691 "discovery_filter": "match_any", 00:15:06.691 "admin_cmd_passthru": { 00:15:06.691 "identify_ctrlr": false 00:15:06.691 }, 00:15:06.691 "dhchap_digests": [ 00:15:06.691 "sha256", 00:15:06.691 "sha384", 00:15:06.691 "sha512" 00:15:06.691 ], 00:15:06.691 "dhchap_dhgroups": [ 00:15:06.691 "null", 00:15:06.691 "ffdhe2048", 00:15:06.691 "ffdhe3072", 00:15:06.691 "ffdhe4096", 00:15:06.691 "ffdhe6144", 00:15:06.691 "ffdhe8192" 00:15:06.691 ] 00:15:06.692 } 00:15:06.692 }, 00:15:06.692 { 00:15:06.692 "method": "nvmf_set_max_subsystems", 00:15:06.692 "params": { 00:15:06.692 "max_subsystems": 1024 00:15:06.692 } 00:15:06.692 }, 00:15:06.692 { 00:15:06.692 "method": "nvmf_set_crdt", 00:15:06.692 "params": { 00:15:06.692 "crdt1": 0, 00:15:06.692 "crdt2": 0, 00:15:06.692 "crdt3": 0 00:15:06.692 } 00:15:06.692 }, 00:15:06.692 { 00:15:06.692 "method": "nvmf_create_transport", 00:15:06.692 "params": { 00:15:06.692 "trtype": "TCP", 00:15:06.692 "max_queue_depth": 128, 00:15:06.692 "max_io_qpairs_per_ctrlr": 127, 00:15:06.692 "in_capsule_data_size": 4096, 00:15:06.692 "max_io_size": 131072, 00:15:06.692 "io_unit_size": 131072, 00:15:06.692 "max_aq_depth": 128, 00:15:06.692 "num_shared_buffers": 511, 00:15:06.692 "buf_cache_size": 4294967295, 00:15:06.692 "dif_insert_or_strip": false, 00:15:06.692 "zcopy": false, 00:15:06.692 "c2h_success": false, 00:15:06.692 "sock_priority": 0, 00:15:06.692 "abort_timeout_sec": 1, 00:15:06.692 "ack_timeout": 0, 00:15:06.692 "data_wr_pool_size": 0 00:15:06.692 } 00:15:06.692 }, 00:15:06.692 { 00:15:06.692 "method": "nvmf_create_subsystem", 00:15:06.692 "params": { 00:15:06.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.692 "allow_any_host": false, 00:15:06.692 "serial_number": "SPDK00000000000001", 00:15:06.692 "model_number": "SPDK bdev Controller", 00:15:06.692 "max_namespaces": 10, 00:15:06.692 "min_cntlid": 1, 00:15:06.692 "max_cntlid": 65519, 00:15:06.692 "ana_reporting": false 00:15:06.692 } 00:15:06.692 }, 00:15:06.692 { 00:15:06.692 "method": "nvmf_subsystem_add_host", 00:15:06.692 "params": { 00:15:06.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.692 "host": "nqn.2016-06.io.spdk:host1", 00:15:06.692 "psk": "key0" 00:15:06.692 } 00:15:06.692 }, 00:15:06.692 { 00:15:06.692 "method": "nvmf_subsystem_add_ns", 00:15:06.692 "params": { 00:15:06.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.692 "namespace": { 00:15:06.692 "nsid": 1, 00:15:06.692 "bdev_name": "malloc0", 00:15:06.692 "nguid": "150233635BBA4857A563CBCC1E659191", 00:15:06.692 "uuid": "15023363-5bba-4857-a563-cbcc1e659191", 00:15:06.692 "no_auto_visible": false 00:15:06.692 } 00:15:06.692 } 
00:15:06.692 }, 00:15:06.692 { 00:15:06.692 "method": "nvmf_subsystem_add_listener", 00:15:06.692 "params": { 00:15:06.692 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.692 "listen_address": { 00:15:06.692 "trtype": "TCP", 00:15:06.692 "adrfam": "IPv4", 00:15:06.692 "traddr": "10.0.0.3", 00:15:06.692 "trsvcid": "4420" 00:15:06.692 }, 00:15:06.692 "secure_channel": true 00:15:06.692 } 00:15:06.692 } 00:15:06.692 ] 00:15:06.692 } 00:15:06.692 ] 00:15:06.692 }' 00:15:06.692 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.692 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84914 00:15:06.692 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84914 00:15:06.692 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84914 ']' 00:15:06.692 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:06.692 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.692 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:06.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.692 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.692 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:06.692 12:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.692 [2024-11-19 12:35:11.886270] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:06.692 [2024-11-19 12:35:11.886363] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.951 [2024-11-19 12:35:12.028473] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.951 [2024-11-19 12:35:12.065048] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:06.951 [2024-11-19 12:35:12.065109] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:06.951 [2024-11-19 12:35:12.065136] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:06.951 [2024-11-19 12:35:12.065158] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:06.951 [2024-11-19 12:35:12.065164] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
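The nvmf_tgt restarted here is launched with "-c /dev/fd/62": the JSON blob echoed at target/tls.sh@205 above is fed to the new target as its startup configuration without being written to disk, the /dev/fd path being the kind bash creates for a process substitution. A minimal sketch of that pattern, illustrative only (the variable name tgtconf is assumed, and the "ip netns exec" prefix used in this run is omitted):

    # replay a previously captured SPDK configuration into a fresh target
    tgtconf=$(scripts/rpc.py save_config)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")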
00:15:06.951 [2024-11-19 12:35:12.065227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.209 [2024-11-19 12:35:12.208794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:07.209 [2024-11-19 12:35:12.263443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.209 [2024-11-19 12:35:12.304336] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:07.209 [2024-11-19 12:35:12.304548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:07.777 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:07.777 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:07.777 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:07.777 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:07.777 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.777 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.777 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=84946 00:15:07.777 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 84946 /var/tmp/bdevperf.sock 00:15:07.777 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84946 ']' 00:15:07.777 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:07.777 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.777 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:07.777 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:07.777 "subsystems": [ 00:15:07.777 { 00:15:07.777 "subsystem": "keyring", 00:15:07.777 "config": [ 00:15:07.777 { 00:15:07.777 "method": "keyring_file_add_key", 00:15:07.777 "params": { 00:15:07.777 "name": "key0", 00:15:07.777 "path": "/tmp/tmp.ZwyV5UGRwu" 00:15:07.777 } 00:15:07.777 } 00:15:07.777 ] 00:15:07.777 }, 00:15:07.777 { 00:15:07.777 "subsystem": "iobuf", 00:15:07.777 "config": [ 00:15:07.777 { 00:15:07.777 "method": "iobuf_set_options", 00:15:07.777 "params": { 00:15:07.777 "small_pool_count": 8192, 00:15:07.777 "large_pool_count": 1024, 00:15:07.777 "small_bufsize": 8192, 00:15:07.777 "large_bufsize": 135168 00:15:07.777 } 00:15:07.777 } 00:15:07.777 ] 00:15:07.777 }, 00:15:07.777 { 00:15:07.777 "subsystem": "sock", 00:15:07.777 "config": [ 00:15:07.777 { 00:15:07.777 "method": "sock_set_default_impl", 00:15:07.777 "params": { 00:15:07.777 "impl_name": "uring" 00:15:07.777 } 00:15:07.777 }, 00:15:07.777 { 00:15:07.777 "method": "sock_impl_set_options", 00:15:07.777 "params": { 00:15:07.777 "impl_name": "ssl", 00:15:07.777 "recv_buf_size": 4096, 00:15:07.777 "send_buf_size": 4096, 00:15:07.777 "enable_recv_pipe": true, 00:15:07.777 "enable_quickack": false, 00:15:07.777 "enable_placement_id": 0, 00:15:07.777 "enable_zerocopy_send_server": true, 00:15:07.777 "enable_zerocopy_send_client": false, 00:15:07.777 
"zerocopy_threshold": 0, 00:15:07.777 "tls_version": 0, 00:15:07.777 "enable_ktls": false 00:15:07.777 } 00:15:07.777 }, 00:15:07.777 { 00:15:07.777 "method": "sock_impl_set_options", 00:15:07.777 "params": { 00:15:07.777 "impl_name": "posix", 00:15:07.777 "recv_buf_size": 2097152, 00:15:07.777 "send_buf_size": 2097152, 00:15:07.777 "enable_recv_pipe": true, 00:15:07.777 "enable_quickack": false, 00:15:07.777 "enable_placement_id": 0, 00:15:07.777 "enable_zerocopy_send_server": true, 00:15:07.777 "enable_zerocopy_send_client": false, 00:15:07.777 "zerocopy_threshold": 0, 00:15:07.777 "tls_version": 0, 00:15:07.777 "enable_ktls": false 00:15:07.777 } 00:15:07.777 }, 00:15:07.777 { 00:15:07.777 "method": "sock_impl_set_options", 00:15:07.777 "params": { 00:15:07.777 "impl_name": "uring", 00:15:07.777 "recv_buf_size": 2097152, 00:15:07.777 "send_buf_size": 2097152, 00:15:07.777 "enable_recv_pipe": true, 00:15:07.777 "enable_quickack": false, 00:15:07.777 "enable_placement_id": 0, 00:15:07.777 "enable_zerocopy_send_server": false, 00:15:07.777 "enable_zerocopy_send_client": false, 00:15:07.777 "zerocopy_threshold": 0, 00:15:07.777 "tls_version": 0, 00:15:07.777 "enable_ktls": false 00:15:07.777 } 00:15:07.777 } 00:15:07.777 ] 00:15:07.777 }, 00:15:07.777 { 00:15:07.777 "subsystem": "vmd", 00:15:07.777 "config": [] 00:15:07.777 }, 00:15:07.777 { 00:15:07.777 "subsystem": "accel", 00:15:07.777 "config": [ 00:15:07.777 { 00:15:07.777 "method": "accel_set_options", 00:15:07.777 "params": { 00:15:07.777 "small_cache_size": 128, 00:15:07.777 "large_cache_size": 16, 00:15:07.777 "task_count": 2048, 00:15:07.777 "sequence_count": 2048, 00:15:07.777 "buf_count": 2048 00:15:07.777 } 00:15:07.777 } 00:15:07.777 ] 00:15:07.777 }, 00:15:07.777 { 00:15:07.777 "subsystem": "bdev", 00:15:07.777 "config": [ 00:15:07.777 { 00:15:07.777 "method": "bdev_set_options", 00:15:07.777 "params": { 00:15:07.777 "bdev_io_pool_size": 65535, 00:15:07.777 "bdev_io_cache_size": 256, 00:15:07.777 "bdev_auto_examine": true, 00:15:07.777 "iobuf_small_cache_size": 128, 00:15:07.777 "iobuf_large_cache_size": 16 00:15:07.777 } 00:15:07.777 }, 00:15:07.777 { 00:15:07.777 "method": "bdev_raid_set_options", 00:15:07.777 "params": { 00:15:07.777 "process_window_size_kb": 1024, 00:15:07.777 "process_max_bandwidth_mb_sec": 0 00:15:07.777 } 00:15:07.777 }, 00:15:07.777 { 00:15:07.777 "method": "bdev_iscsi_set_options", 00:15:07.777 "params": { 00:15:07.777 "timeout_sec": 30 00:15:07.777 } 00:15:07.777 }, 00:15:07.777 { 00:15:07.777 "method": "bdev_nvme_set_options", 00:15:07.777 "params": { 00:15:07.777 "action_on_timeout": "none", 00:15:07.777 "timeout_us": 0, 00:15:07.778 "timeout_admin_us": 0, 00:15:07.778 "keep_alive_timeout_ms": 10000, 00:15:07.778 "arbitration_burst": 0, 00:15:07.778 "low_priority_weight": 0, 00:15:07.778 "medium_priority_weight": 0, 00:15:07.778 "high_priority_weight": 0, 00:15:07.778 "nvme_adminq_poll_period_us": 10000, 00:15:07.778 "nvme_ioq_poll_period_us": 0, 00:15:07.778 "io_queue_requests": 512, 00:15:07.778 "delay_cmd_submit": true, 00:15:07.778 "transport_retry_count": 4, 00:15:07.778 "bdev_retry_count": 3, 00:15:07.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:07.778 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:07.778 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:07.778 "transport_ack_timeout": 0, 00:15:07.778 "ctrlr_loss_timeout_sec": 0, 00:15:07.778 "reconnect_delay_sec": 0, 00:15:07.778 "fast_io_fail_timeout_sec": 0, 00:15:07.778 "disable_auto_failback": false, 00:15:07.778 "generate_uuids": false, 00:15:07.778 "transport_tos": 0, 00:15:07.778 "nvme_error_stat": false, 00:15:07.778 "rdma_srq_size": 0, 00:15:07.778 "io_path_stat": false, 00:15:07.778 "allow_accel_sequence": false, 00:15:07.778 "rdma_max_cq_size": 0, 00:15:07.778 "rdma_cm_event_timeout_ms": 0, 00:15:07.778 "dhchap_digests": [ 00:15:07.778 "sha256", 00:15:07.778 "sha384", 00:15:07.778 "sha512" 00:15:07.778 ], 00:15:07.778 "dhchap_dhgroups": [ 00:15:07.778 "null", 00:15:07.778 "ffdhe2048", 00:15:07.778 "ffdhe3072", 00:15:07.778 "ffdhe4096", 00:15:07.778 "ffdhe6144", 00:15:07.778 "ffdhe8192" 00:15:07.778 ] 00:15:07.778 } 00:15:07.778 }, 00:15:07.778 { 00:15:07.778 "method": "bdev_nvme_attach_controller", 00:15:07.778 "params": { 00:15:07.778 "name": "TLSTEST", 00:15:07.778 "trtype": "TCP", 00:15:07.778 "adrfam": "IPv4", 00:15:07.778 "traddr": "10.0.0.3", 00:15:07.778 "trsvcid": "4420", 00:15:07.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.778 "prchk_reftag": false, 00:15:07.778 "prchk_guard": false, 00:15:07.778 "ctrlr_loss_timeout_sec": 0, 00:15:07.778 "reconnect_delay_sec": 0, 00:15:07.778 "fast_io_fail_timeout_sec": 0, 00:15:07.778 "psk": "key0", 00:15:07.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:07.778 "hdgst": false, 00:15:07.778 "ddgst": false 00:15:07.778 } 00:15:07.778 }, 00:15:07.778 { 00:15:07.778 "method": "bdev_nvme_set_hotplug", 00:15:07.778 "params": { 00:15:07.778 "period_us": 100000, 00:15:07.778 "enable": false 00:15:07.778 } 00:15:07.778 }, 00:15:07.778 { 00:15:07.778 "method": "bdev_wait_for_examine" 00:15:07.778 } 00:15:07.778 ] 00:15:07.778 }, 00:15:07.778 { 00:15:07.778 "subsystem": "nbd", 00:15:07.778 "config": [] 00:15:07.778 } 00:15:07.778 ] 00:15:07.778 }' 00:15:07.778 12:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.778 [2024-11-19 12:35:12.898098] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:07.778 [2024-11-19 12:35:12.898389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84946 ] 00:15:08.036 [2024-11-19 12:35:13.040557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.036 [2024-11-19 12:35:13.085730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.036 [2024-11-19 12:35:13.202489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:08.036 [2024-11-19 12:35:13.234730] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:08.971 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:08.971 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:08.971 12:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:08.971 Running I/O for 10 seconds... 
00:15:11.286 4303.00 IOPS, 16.81 MiB/s [2024-11-19T12:35:17.484Z] 4290.00 IOPS, 16.76 MiB/s [2024-11-19T12:35:18.422Z] 4255.33 IOPS, 16.62 MiB/s [2024-11-19T12:35:19.362Z] 4254.25 IOPS, 16.62 MiB/s [2024-11-19T12:35:20.301Z] 4300.80 IOPS, 16.80 MiB/s [2024-11-19T12:35:21.238Z] 4305.00 IOPS, 16.82 MiB/s [2024-11-19T12:35:22.237Z] 4286.14 IOPS, 16.74 MiB/s [2024-11-19T12:35:23.172Z] 4246.62 IOPS, 16.59 MiB/s [2024-11-19T12:35:24.547Z] 4074.00 IOPS, 15.91 MiB/s [2024-11-19T12:35:24.548Z] 4071.00 IOPS, 15.90 MiB/s 00:15:19.288 Latency(us) 00:15:19.288 [2024-11-19T12:35:24.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.288 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:19.288 Verification LBA range: start 0x0 length 0x2000 00:15:19.288 TLSTESTn1 : 10.02 4076.65 15.92 0.00 0.00 31340.32 6047.19 261190.75 00:15:19.288 [2024-11-19T12:35:24.548Z] =================================================================================================================== 00:15:19.288 [2024-11-19T12:35:24.548Z] Total : 4076.65 15.92 0.00 0.00 31340.32 6047.19 261190.75 00:15:19.288 { 00:15:19.288 "results": [ 00:15:19.288 { 00:15:19.288 "job": "TLSTESTn1", 00:15:19.288 "core_mask": "0x4", 00:15:19.288 "workload": "verify", 00:15:19.288 "status": "finished", 00:15:19.288 "verify_range": { 00:15:19.288 "start": 0, 00:15:19.288 "length": 8192 00:15:19.288 }, 00:15:19.288 "queue_depth": 128, 00:15:19.288 "io_size": 4096, 00:15:19.288 "runtime": 10.017296, 00:15:19.288 "iops": 4076.6490278414453, 00:15:19.288 "mibps": 15.924410265005646, 00:15:19.288 "io_failed": 0, 00:15:19.288 "io_timeout": 0, 00:15:19.288 "avg_latency_us": 31340.32150914834, 00:15:19.288 "min_latency_us": 6047.185454545454, 00:15:19.288 "max_latency_us": 261190.74909090908 00:15:19.288 } 00:15:19.288 ], 00:15:19.288 "core_count": 1 00:15:19.288 } 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 84946 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84946 ']' 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84946 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84946 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84946' 00:15:19.288 killing process with pid 84946 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84946 00:15:19.288 Received shutdown signal, test time was about 10.000000 seconds 00:15:19.288 00:15:19.288 Latency(us) 00:15:19.288 [2024-11-19T12:35:24.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.288 [2024-11-19T12:35:24.548Z] 
=================================================================================================================== 00:15:19.288 [2024-11-19T12:35:24.548Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84946 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 84914 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84914 ']' 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84914 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84914 00:15:19.288 killing process with pid 84914 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84914' 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84914 00:15:19.288 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84914 00:15:19.548 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:19.548 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:19.548 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:19.548 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.548 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=85085 00:15:19.548 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:19.548 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 85085 00:15:19.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.548 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85085 ']' 00:15:19.548 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.548 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:19.548 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.548 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:19.548 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.548 [2024-11-19 12:35:24.619321] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:19.548 [2024-11-19 12:35:24.619431] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.548 [2024-11-19 12:35:24.762259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.548 [2024-11-19 12:35:24.802939] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.548 [2024-11-19 12:35:24.803172] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.548 [2024-11-19 12:35:24.803338] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.548 [2024-11-19 12:35:24.803357] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.548 [2024-11-19 12:35:24.803366] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.548 [2024-11-19 12:35:24.803407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.807 [2024-11-19 12:35:24.836216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:19.807 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:19.807 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:19.807 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:19.807 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:19.807 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.807 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.807 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ZwyV5UGRwu 00:15:19.807 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZwyV5UGRwu 00:15:19.807 12:35:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:20.066 [2024-11-19 12:35:25.142642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.066 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:20.325 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:20.584 [2024-11-19 12:35:25.682882] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:20.584 [2024-11-19 12:35:25.683153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:20.584 12:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:20.842 malloc0 00:15:20.842 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:15:21.100 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZwyV5UGRwu 00:15:21.359 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:21.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:21.926 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=85133 00:15:21.926 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:21.926 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:21.926 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 85133 /var/tmp/bdevperf.sock 00:15:21.926 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85133 ']' 00:15:21.926 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:21.926 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:21.926 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:21.926 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:21.926 12:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.926 [2024-11-19 12:35:26.950367] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:21.926 [2024-11-19 12:35:26.950476] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85133 ] 00:15:21.926 [2024-11-19 12:35:27.089501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.926 [2024-11-19 12:35:27.132450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.926 [2024-11-19 12:35:27.167099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:22.184 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:22.184 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:22.184 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZwyV5UGRwu 00:15:22.443 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:22.701 [2024-11-19 12:35:27.841847] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:22.701 nvme0n1 00:15:22.701 12:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:22.959 Running I/O for 1 seconds... 00:15:23.894 3357.00 IOPS, 13.11 MiB/s 00:15:23.894 Latency(us) 00:15:23.894 [2024-11-19T12:35:29.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.894 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:23.894 Verification LBA range: start 0x0 length 0x2000 00:15:23.894 nvme0n1 : 1.03 3392.92 13.25 0.00 0.00 37220.40 8519.68 37415.10 00:15:23.894 [2024-11-19T12:35:29.154Z] =================================================================================================================== 00:15:23.894 [2024-11-19T12:35:29.154Z] Total : 3392.92 13.25 0.00 0.00 37220.40 8519.68 37415.10 00:15:23.894 { 00:15:23.894 "results": [ 00:15:23.894 { 00:15:23.894 "job": "nvme0n1", 00:15:23.894 "core_mask": "0x2", 00:15:23.894 "workload": "verify", 00:15:23.894 "status": "finished", 00:15:23.894 "verify_range": { 00:15:23.894 "start": 0, 00:15:23.894 "length": 8192 00:15:23.894 }, 00:15:23.894 "queue_depth": 128, 00:15:23.894 "io_size": 4096, 00:15:23.894 "runtime": 1.02714, 00:15:23.894 "iops": 3392.916252896392, 00:15:23.894 "mibps": 13.253579112876531, 00:15:23.894 "io_failed": 0, 00:15:23.894 "io_timeout": 0, 00:15:23.894 "avg_latency_us": 37220.39634172428, 00:15:23.894 "min_latency_us": 8519.68, 00:15:23.894 "max_latency_us": 37415.09818181818 00:15:23.894 } 00:15:23.894 ], 00:15:23.894 "core_count": 1 00:15:23.894 } 00:15:23.894 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 85133 00:15:23.894 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85133 ']' 00:15:23.894 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85133 00:15:23.894 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:15:23.894 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:23.894 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85133 00:15:23.894 killing process with pid 85133 00:15:23.894 Received shutdown signal, test time was about 1.000000 seconds 00:15:23.894 00:15:23.894 Latency(us) 00:15:23.894 [2024-11-19T12:35:29.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.894 [2024-11-19T12:35:29.154Z] =================================================================================================================== 00:15:23.894 [2024-11-19T12:35:29.154Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:23.894 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:23.894 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:23.894 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85133' 00:15:23.894 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85133 00:15:23.894 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85133 00:15:24.153 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 85085 00:15:24.153 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85085 ']' 00:15:24.153 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85085 00:15:24.153 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:24.153 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:24.153 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85085 00:15:24.153 killing process with pid 85085 00:15:24.153 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:24.153 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:24.153 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85085' 00:15:24.153 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85085 00:15:24.153 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85085 00:15:24.411 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:24.411 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:24.411 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:24.411 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:24.411 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=85181 00:15:24.411 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:24.411 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 85181 00:15:24.411 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85181 ']' 00:15:24.411 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.411 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.411 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.411 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.411 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.411 [2024-11-19 12:35:29.570398] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:24.411 [2024-11-19 12:35:29.570497] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.669 [2024-11-19 12:35:29.711550] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.669 [2024-11-19 12:35:29.753088] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.669 [2024-11-19 12:35:29.753162] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.669 [2024-11-19 12:35:29.753185] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.669 [2024-11-19 12:35:29.753196] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.669 [2024-11-19 12:35:29.753205] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:24.669 [2024-11-19 12:35:29.753236] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.669 [2024-11-19 12:35:29.787965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:24.669 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:24.669 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:24.669 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:24.669 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:24.669 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.669 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.669 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:24.669 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.669 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.669 [2024-11-19 12:35:29.903490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.927 malloc0 00:15:24.927 [2024-11-19 12:35:29.939062] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:24.927 [2024-11-19 12:35:29.939337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:24.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:24.927 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.927 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=85201 00:15:24.927 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:24.927 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 85201 /var/tmp/bdevperf.sock 00:15:24.927 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85201 ']' 00:15:24.927 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:24.927 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.927 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:24.927 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.927 12:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.927 [2024-11-19 12:35:30.040639] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:24.927 [2024-11-19 12:35:30.041062] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85201 ] 00:15:25.185 [2024-11-19 12:35:30.187486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.185 [2024-11-19 12:35:30.231044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.185 [2024-11-19 12:35:30.265408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:25.185 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.185 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:25.185 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZwyV5UGRwu 00:15:25.444 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:25.703 [2024-11-19 12:35:30.776660] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:25.703 nvme0n1 00:15:25.703 12:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:25.962 Running I/O for 1 seconds... 00:15:26.899 4238.00 IOPS, 16.55 MiB/s 00:15:26.899 Latency(us) 00:15:26.899 [2024-11-19T12:35:32.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.899 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:26.899 Verification LBA range: start 0x0 length 0x2000 00:15:26.899 nvme0n1 : 1.02 4287.53 16.75 0.00 0.00 29481.57 1616.06 21567.30 00:15:26.899 [2024-11-19T12:35:32.159Z] =================================================================================================================== 00:15:26.899 [2024-11-19T12:35:32.159Z] Total : 4287.53 16.75 0.00 0.00 29481.57 1616.06 21567.30 00:15:26.899 { 00:15:26.899 "results": [ 00:15:26.899 { 00:15:26.899 "job": "nvme0n1", 00:15:26.899 "core_mask": "0x2", 00:15:26.899 "workload": "verify", 00:15:26.899 "status": "finished", 00:15:26.899 "verify_range": { 00:15:26.899 "start": 0, 00:15:26.899 "length": 8192 00:15:26.899 }, 00:15:26.899 "queue_depth": 128, 00:15:26.899 "io_size": 4096, 00:15:26.899 "runtime": 1.018534, 00:15:26.899 "iops": 4287.534829470592, 00:15:26.899 "mibps": 16.7481829276195, 00:15:26.899 "io_failed": 0, 00:15:26.899 "io_timeout": 0, 00:15:26.899 "avg_latency_us": 29481.574992609865, 00:15:26.899 "min_latency_us": 1616.0581818181818, 00:15:26.899 "max_latency_us": 21567.30181818182 00:15:26.899 } 00:15:26.899 ], 00:15:26.899 "core_count": 1 00:15:26.899 } 00:15:26.899 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:26.899 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.899 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.899 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.899 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:26.899 "subsystems": [ 00:15:26.899 { 00:15:26.899 "subsystem": "keyring", 00:15:26.899 "config": [ 00:15:26.899 { 00:15:26.899 "method": "keyring_file_add_key", 00:15:26.899 "params": { 00:15:26.899 "name": "key0", 00:15:26.899 "path": "/tmp/tmp.ZwyV5UGRwu" 00:15:26.899 } 00:15:26.899 } 00:15:26.899 ] 00:15:26.899 }, 00:15:26.899 { 00:15:26.900 "subsystem": "iobuf", 00:15:26.900 "config": [ 00:15:26.900 { 00:15:26.900 "method": "iobuf_set_options", 00:15:26.900 "params": { 00:15:26.900 "small_pool_count": 8192, 00:15:26.900 "large_pool_count": 1024, 00:15:26.900 "small_bufsize": 8192, 00:15:26.900 "large_bufsize": 135168 00:15:26.900 } 00:15:26.900 } 00:15:26.900 ] 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "subsystem": "sock", 00:15:26.900 "config": [ 00:15:26.900 { 00:15:26.900 "method": "sock_set_default_impl", 00:15:26.900 "params": { 00:15:26.900 "impl_name": "uring" 00:15:26.900 } 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "method": "sock_impl_set_options", 00:15:26.900 "params": { 00:15:26.900 "impl_name": "ssl", 00:15:26.900 "recv_buf_size": 4096, 00:15:26.900 "send_buf_size": 4096, 00:15:26.900 "enable_recv_pipe": true, 00:15:26.900 "enable_quickack": false, 00:15:26.900 "enable_placement_id": 0, 00:15:26.900 "enable_zerocopy_send_server": true, 00:15:26.900 "enable_zerocopy_send_client": false, 00:15:26.900 "zerocopy_threshold": 0, 00:15:26.900 "tls_version": 0, 00:15:26.900 "enable_ktls": false 00:15:26.900 } 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "method": "sock_impl_set_options", 00:15:26.900 "params": { 00:15:26.900 "impl_name": "posix", 00:15:26.900 "recv_buf_size": 2097152, 00:15:26.900 "send_buf_size": 2097152, 00:15:26.900 "enable_recv_pipe": true, 00:15:26.900 "enable_quickack": false, 00:15:26.900 "enable_placement_id": 0, 00:15:26.900 "enable_zerocopy_send_server": true, 00:15:26.900 "enable_zerocopy_send_client": false, 00:15:26.900 "zerocopy_threshold": 0, 00:15:26.900 "tls_version": 0, 00:15:26.900 "enable_ktls": false 00:15:26.900 } 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "method": "sock_impl_set_options", 00:15:26.900 "params": { 00:15:26.900 "impl_name": "uring", 00:15:26.900 "recv_buf_size": 2097152, 00:15:26.900 "send_buf_size": 2097152, 00:15:26.900 "enable_recv_pipe": true, 00:15:26.900 "enable_quickack": false, 00:15:26.900 "enable_placement_id": 0, 00:15:26.900 "enable_zerocopy_send_server": false, 00:15:26.900 "enable_zerocopy_send_client": false, 00:15:26.900 "zerocopy_threshold": 0, 00:15:26.900 "tls_version": 0, 00:15:26.900 "enable_ktls": false 00:15:26.900 } 00:15:26.900 } 00:15:26.900 ] 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "subsystem": "vmd", 00:15:26.900 "config": [] 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "subsystem": "accel", 00:15:26.900 "config": [ 00:15:26.900 { 00:15:26.900 "method": "accel_set_options", 00:15:26.900 "params": { 00:15:26.900 "small_cache_size": 128, 00:15:26.900 "large_cache_size": 16, 00:15:26.900 "task_count": 2048, 00:15:26.900 "sequence_count": 2048, 00:15:26.900 "buf_count": 2048 00:15:26.900 } 00:15:26.900 } 00:15:26.900 ] 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "subsystem": "bdev", 00:15:26.900 "config": [ 00:15:26.900 { 00:15:26.900 "method": "bdev_set_options", 00:15:26.900 "params": { 00:15:26.900 "bdev_io_pool_size": 65535, 00:15:26.900 "bdev_io_cache_size": 256, 00:15:26.900 "bdev_auto_examine": true, 00:15:26.900 "iobuf_small_cache_size": 
128, 00:15:26.900 "iobuf_large_cache_size": 16 00:15:26.900 } 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "method": "bdev_raid_set_options", 00:15:26.900 "params": { 00:15:26.900 "process_window_size_kb": 1024, 00:15:26.900 "process_max_bandwidth_mb_sec": 0 00:15:26.900 } 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "method": "bdev_iscsi_set_options", 00:15:26.900 "params": { 00:15:26.900 "timeout_sec": 30 00:15:26.900 } 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "method": "bdev_nvme_set_options", 00:15:26.900 "params": { 00:15:26.900 "action_on_timeout": "none", 00:15:26.900 "timeout_us": 0, 00:15:26.900 "timeout_admin_us": 0, 00:15:26.900 "keep_alive_timeout_ms": 10000, 00:15:26.900 "arbitration_burst": 0, 00:15:26.900 "low_priority_weight": 0, 00:15:26.900 "medium_priority_weight": 0, 00:15:26.900 "high_priority_weight": 0, 00:15:26.900 "nvme_adminq_poll_period_us": 10000, 00:15:26.900 "nvme_ioq_poll_period_us": 0, 00:15:26.900 "io_queue_requests": 0, 00:15:26.900 "delay_cmd_submit": true, 00:15:26.900 "transport_retry_count": 4, 00:15:26.900 "bdev_retry_count": 3, 00:15:26.900 "transport_ack_timeout": 0, 00:15:26.900 "ctrlr_loss_timeout_sec": 0, 00:15:26.900 "reconnect_delay_sec": 0, 00:15:26.900 "fast_io_fail_timeout_sec": 0, 00:15:26.900 "disable_auto_failback": false, 00:15:26.900 "generate_uuids": false, 00:15:26.900 "transport_tos": 0, 00:15:26.900 "nvme_error_stat": false, 00:15:26.900 "rdma_srq_size": 0, 00:15:26.900 "io_path_stat": false, 00:15:26.900 "allow_accel_sequence": false, 00:15:26.900 "rdma_max_cq_size": 0, 00:15:26.900 "rdma_cm_event_timeout_ms": 0, 00:15:26.900 "dhchap_digests": [ 00:15:26.900 "sha256", 00:15:26.900 "sha384", 00:15:26.900 "sha512" 00:15:26.900 ], 00:15:26.900 "dhchap_dhgroups": [ 00:15:26.900 "null", 00:15:26.900 "ffdhe2048", 00:15:26.900 "ffdhe3072", 00:15:26.900 "ffdhe4096", 00:15:26.900 "ffdhe6144", 00:15:26.900 "ffdhe8192" 00:15:26.900 ] 00:15:26.900 } 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "method": "bdev_nvme_set_hotplug", 00:15:26.900 "params": { 00:15:26.900 "period_us": 100000, 00:15:26.900 "enable": false 00:15:26.900 } 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "method": "bdev_malloc_create", 00:15:26.900 "params": { 00:15:26.900 "name": "malloc0", 00:15:26.900 "num_blocks": 8192, 00:15:26.900 "block_size": 4096, 00:15:26.900 "physical_block_size": 4096, 00:15:26.900 "uuid": "c1c49b36-1b22-4205-88d8-87824c7d6dc7", 00:15:26.900 "optimal_io_boundary": 0, 00:15:26.900 "md_size": 0, 00:15:26.900 "dif_type": 0, 00:15:26.900 "dif_is_head_of_md": false, 00:15:26.900 "dif_pi_format": 0 00:15:26.900 } 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "method": "bdev_wait_for_examine" 00:15:26.900 } 00:15:26.900 ] 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "subsystem": "nbd", 00:15:26.900 "config": [] 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "subsystem": "scheduler", 00:15:26.900 "config": [ 00:15:26.900 { 00:15:26.900 "method": "framework_set_scheduler", 00:15:26.900 "params": { 00:15:26.900 "name": "static" 00:15:26.900 } 00:15:26.900 } 00:15:26.900 ] 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "subsystem": "nvmf", 00:15:26.900 "config": [ 00:15:26.900 { 00:15:26.900 "method": "nvmf_set_config", 00:15:26.900 "params": { 00:15:26.900 "discovery_filter": "match_any", 00:15:26.900 "admin_cmd_passthru": { 00:15:26.900 "identify_ctrlr": false 00:15:26.900 }, 00:15:26.900 "dhchap_digests": [ 00:15:26.900 "sha256", 00:15:26.900 "sha384", 00:15:26.900 "sha512" 00:15:26.900 ], 00:15:26.900 "dhchap_dhgroups": [ 00:15:26.900 "null", 00:15:26.900 
"ffdhe2048", 00:15:26.900 "ffdhe3072", 00:15:26.900 "ffdhe4096", 00:15:26.900 "ffdhe6144", 00:15:26.900 "ffdhe8192" 00:15:26.900 ] 00:15:26.900 } 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "method": "nvmf_set_max_subsystems", 00:15:26.900 "params": { 00:15:26.900 "max_subsystems": 1024 00:15:26.900 } 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "method": "nvmf_set_crdt", 00:15:26.900 "params": { 00:15:26.900 "crdt1": 0, 00:15:26.900 "crdt2": 0, 00:15:26.900 "crdt3": 0 00:15:26.900 } 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "method": "nvmf_create_transport", 00:15:26.900 "params": { 00:15:26.900 "trtype": "TCP", 00:15:26.900 "max_queue_depth": 128, 00:15:26.900 "max_io_qpairs_per_ctrlr": 127, 00:15:26.900 "in_capsule_data_size": 4096, 00:15:26.900 "max_io_size": 131072, 00:15:26.900 "io_unit_size": 131072, 00:15:26.900 "max_aq_depth": 128, 00:15:26.900 "num_shared_buffers": 511, 00:15:26.900 "buf_cache_size": 4294967295, 00:15:26.900 "dif_insert_or_strip": false, 00:15:26.900 "zcopy": false, 00:15:26.900 "c2h_success": false, 00:15:26.900 "sock_priority": 0, 00:15:26.900 "abort_timeout_sec": 1, 00:15:26.900 "ack_timeout": 0, 00:15:26.900 "data_wr_pool_size": 0 00:15:26.900 } 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "method": "nvmf_create_subsystem", 00:15:26.900 "params": { 00:15:26.900 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:26.900 "allow_any_host": false, 00:15:26.900 "serial_number": "00000000000000000000", 00:15:26.900 "model_number": "SPDK bdev Controller", 00:15:26.900 "max_namespaces": 32, 00:15:26.900 "min_cntlid": 1, 00:15:26.900 "max_cntlid": 65519, 00:15:26.900 "ana_reporting": false 00:15:26.900 } 00:15:26.900 }, 00:15:26.900 { 00:15:26.900 "method": "nvmf_subsystem_add_host", 00:15:26.900 "params": { 00:15:26.900 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:26.900 "host": "nqn.2016-06.io.spdk:host1", 00:15:26.900 "psk": "key0" 00:15:26.900 } 00:15:26.900 }, 00:15:26.901 { 00:15:26.901 "method": "nvmf_subsystem_add_ns", 00:15:26.901 "params": { 00:15:26.901 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:26.901 "namespace": { 00:15:26.901 "nsid": 1, 00:15:26.901 "bdev_name": "malloc0", 00:15:26.901 "nguid": "C1C49B361B22420588D887824C7D6DC7", 00:15:26.901 "uuid": "c1c49b36-1b22-4205-88d8-87824c7d6dc7", 00:15:26.901 "no_auto_visible": false 00:15:26.901 } 00:15:26.901 } 00:15:26.901 }, 00:15:26.901 { 00:15:26.901 "method": "nvmf_subsystem_add_listener", 00:15:26.901 "params": { 00:15:26.901 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:26.901 "listen_address": { 00:15:26.901 "trtype": "TCP", 00:15:26.901 "adrfam": "IPv4", 00:15:26.901 "traddr": "10.0.0.3", 00:15:26.901 "trsvcid": "4420" 00:15:26.901 }, 00:15:26.901 "secure_channel": false, 00:15:26.901 "sock_impl": "ssl" 00:15:26.901 } 00:15:26.901 } 00:15:26.901 ] 00:15:26.901 } 00:15:26.901 ] 00:15:26.901 }' 00:15:26.901 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:27.469 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:27.469 "subsystems": [ 00:15:27.469 { 00:15:27.469 "subsystem": "keyring", 00:15:27.469 "config": [ 00:15:27.469 { 00:15:27.469 "method": "keyring_file_add_key", 00:15:27.469 "params": { 00:15:27.469 "name": "key0", 00:15:27.469 "path": "/tmp/tmp.ZwyV5UGRwu" 00:15:27.469 } 00:15:27.469 } 00:15:27.469 ] 00:15:27.469 }, 00:15:27.469 { 00:15:27.469 "subsystem": "iobuf", 00:15:27.469 "config": [ 00:15:27.470 { 00:15:27.470 "method": "iobuf_set_options", 00:15:27.470 
"params": { 00:15:27.470 "small_pool_count": 8192, 00:15:27.470 "large_pool_count": 1024, 00:15:27.470 "small_bufsize": 8192, 00:15:27.470 "large_bufsize": 135168 00:15:27.470 } 00:15:27.470 } 00:15:27.470 ] 00:15:27.470 }, 00:15:27.470 { 00:15:27.470 "subsystem": "sock", 00:15:27.470 "config": [ 00:15:27.470 { 00:15:27.470 "method": "sock_set_default_impl", 00:15:27.470 "params": { 00:15:27.470 "impl_name": "uring" 00:15:27.470 } 00:15:27.470 }, 00:15:27.470 { 00:15:27.470 "method": "sock_impl_set_options", 00:15:27.470 "params": { 00:15:27.470 "impl_name": "ssl", 00:15:27.470 "recv_buf_size": 4096, 00:15:27.470 "send_buf_size": 4096, 00:15:27.470 "enable_recv_pipe": true, 00:15:27.470 "enable_quickack": false, 00:15:27.470 "enable_placement_id": 0, 00:15:27.470 "enable_zerocopy_send_server": true, 00:15:27.470 "enable_zerocopy_send_client": false, 00:15:27.470 "zerocopy_threshold": 0, 00:15:27.470 "tls_version": 0, 00:15:27.470 "enable_ktls": false 00:15:27.470 } 00:15:27.470 }, 00:15:27.470 { 00:15:27.470 "method": "sock_impl_set_options", 00:15:27.470 "params": { 00:15:27.470 "impl_name": "posix", 00:15:27.470 "recv_buf_size": 2097152, 00:15:27.470 "send_buf_size": 2097152, 00:15:27.470 "enable_recv_pipe": true, 00:15:27.470 "enable_quickack": false, 00:15:27.470 "enable_placement_id": 0, 00:15:27.470 "enable_zerocopy_send_server": true, 00:15:27.470 "enable_zerocopy_send_client": false, 00:15:27.470 "zerocopy_threshold": 0, 00:15:27.470 "tls_version": 0, 00:15:27.470 "enable_ktls": false 00:15:27.470 } 00:15:27.470 }, 00:15:27.470 { 00:15:27.470 "method": "sock_impl_set_options", 00:15:27.470 "params": { 00:15:27.470 "impl_name": "uring", 00:15:27.470 "recv_buf_size": 2097152, 00:15:27.470 "send_buf_size": 2097152, 00:15:27.470 "enable_recv_pipe": true, 00:15:27.470 "enable_quickack": false, 00:15:27.470 "enable_placement_id": 0, 00:15:27.470 "enable_zerocopy_send_server": false, 00:15:27.470 "enable_zerocopy_send_client": false, 00:15:27.470 "zerocopy_threshold": 0, 00:15:27.470 "tls_version": 0, 00:15:27.470 "enable_ktls": false 00:15:27.470 } 00:15:27.470 } 00:15:27.470 ] 00:15:27.470 }, 00:15:27.470 { 00:15:27.470 "subsystem": "vmd", 00:15:27.470 "config": [] 00:15:27.470 }, 00:15:27.470 { 00:15:27.470 "subsystem": "accel", 00:15:27.470 "config": [ 00:15:27.470 { 00:15:27.470 "method": "accel_set_options", 00:15:27.470 "params": { 00:15:27.470 "small_cache_size": 128, 00:15:27.470 "large_cache_size": 16, 00:15:27.470 "task_count": 2048, 00:15:27.470 "sequence_count": 2048, 00:15:27.470 "buf_count": 2048 00:15:27.470 } 00:15:27.470 } 00:15:27.470 ] 00:15:27.470 }, 00:15:27.470 { 00:15:27.470 "subsystem": "bdev", 00:15:27.470 "config": [ 00:15:27.470 { 00:15:27.470 "method": "bdev_set_options", 00:15:27.470 "params": { 00:15:27.470 "bdev_io_pool_size": 65535, 00:15:27.470 "bdev_io_cache_size": 256, 00:15:27.470 "bdev_auto_examine": true, 00:15:27.470 "iobuf_small_cache_size": 128, 00:15:27.470 "iobuf_large_cache_size": 16 00:15:27.470 } 00:15:27.470 }, 00:15:27.470 { 00:15:27.470 "method": "bdev_raid_set_options", 00:15:27.470 "params": { 00:15:27.470 "process_window_size_kb": 1024, 00:15:27.470 "process_max_bandwidth_mb_sec": 0 00:15:27.470 } 00:15:27.470 }, 00:15:27.470 { 00:15:27.470 "method": "bdev_iscsi_set_options", 00:15:27.470 "params": { 00:15:27.470 "timeout_sec": 30 00:15:27.470 } 00:15:27.470 }, 00:15:27.470 { 00:15:27.470 "method": "bdev_nvme_set_options", 00:15:27.470 "params": { 00:15:27.470 "action_on_timeout": "none", 00:15:27.470 "timeout_us": 0, 00:15:27.470 
"timeout_admin_us": 0, 00:15:27.470 "keep_alive_timeout_ms": 10000, 00:15:27.470 "arbitration_burst": 0, 00:15:27.470 "low_priority_weight": 0, 00:15:27.470 "medium_priority_weight": 0, 00:15:27.470 "high_priority_weight": 0, 00:15:27.470 "nvme_adminq_poll_period_us": 10000, 00:15:27.470 "nvme_ioq_poll_period_us": 0, 00:15:27.470 "io_queue_requests": 512, 00:15:27.470 "delay_cmd_submit": true, 00:15:27.470 "transport_retry_count": 4, 00:15:27.470 "bdev_retry_count": 3, 00:15:27.470 "transport_ack_timeout": 0, 00:15:27.470 "ctrlr_loss_timeout_sec": 0, 00:15:27.470 "reconnect_delay_sec": 0, 00:15:27.470 "fast_io_fail_timeout_sec": 0, 00:15:27.470 "disable_auto_failback": false, 00:15:27.470 "generate_uuids": false, 00:15:27.470 "transport_tos": 0, 00:15:27.470 "nvme_error_stat": false, 00:15:27.470 "rdma_srq_size": 0, 00:15:27.470 "io_path_stat": false, 00:15:27.470 "allow_accel_sequence": false, 00:15:27.470 "rdma_max_cq_size": 0, 00:15:27.470 "rdma_cm_event_timeout_ms": 0, 00:15:27.470 "dhchap_digests": [ 00:15:27.470 "sha256", 00:15:27.470 "sha384", 00:15:27.470 "sha512" 00:15:27.470 ], 00:15:27.470 "dhchap_dhgroups": [ 00:15:27.470 "null", 00:15:27.470 "ffdhe2048", 00:15:27.470 "ffdhe3072", 00:15:27.470 "ffdhe4096", 00:15:27.470 "ffdhe6144", 00:15:27.470 "ffdhe8192" 00:15:27.470 ] 00:15:27.470 } 00:15:27.470 }, 00:15:27.470 { 00:15:27.470 "method": "bdev_nvme_attach_controller", 00:15:27.470 "params": { 00:15:27.470 "name": "nvme0", 00:15:27.470 "trtype": "TCP", 00:15:27.470 "adrfam": "IPv4", 00:15:27.470 "traddr": "10.0.0.3", 00:15:27.470 "trsvcid": "4420", 00:15:27.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.470 "prchk_reftag": false, 00:15:27.470 "prchk_guard": false, 00:15:27.470 "ctrlr_loss_timeout_sec": 0, 00:15:27.470 "reconnect_delay_sec": 0, 00:15:27.470 "fast_io_fail_timeout_sec": 0, 00:15:27.470 "psk": "key0", 00:15:27.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:27.470 "hdgst": false, 00:15:27.470 "ddgst": false 00:15:27.470 } 00:15:27.470 }, 00:15:27.470 { 00:15:27.470 "method": "bdev_nvme_set_hotplug", 00:15:27.470 "params": { 00:15:27.470 "period_us": 100000, 00:15:27.470 "enable": false 00:15:27.470 } 00:15:27.470 }, 00:15:27.470 { 00:15:27.470 "method": "bdev_enable_histogram", 00:15:27.470 "params": { 00:15:27.470 "name": "nvme0n1", 00:15:27.470 "enable": true 00:15:27.470 } 00:15:27.470 }, 00:15:27.470 { 00:15:27.470 "method": "bdev_wait_for_examine" 00:15:27.470 } 00:15:27.470 ] 00:15:27.470 }, 00:15:27.470 { 00:15:27.470 "subsystem": "nbd", 00:15:27.470 "config": [] 00:15:27.470 } 00:15:27.470 ] 00:15:27.470 }' 00:15:27.470 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 85201 00:15:27.470 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85201 ']' 00:15:27.470 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85201 00:15:27.470 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:27.470 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:27.470 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85201 00:15:27.470 killing process with pid 85201 00:15:27.470 Received shutdown signal, test time was about 1.000000 seconds 00:15:27.470 00:15:27.470 Latency(us) 00:15:27.470 [2024-11-19T12:35:32.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.470 
[2024-11-19T12:35:32.730Z] =================================================================================================================== 00:15:27.471 [2024-11-19T12:35:32.731Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:27.471 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:27.471 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:27.471 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85201' 00:15:27.471 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85201 00:15:27.471 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85201 00:15:27.471 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 85181 00:15:27.471 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85181 ']' 00:15:27.471 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85181 00:15:27.471 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:27.471 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:27.471 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85181 00:15:27.471 killing process with pid 85181 00:15:27.471 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:27.471 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:27.471 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85181' 00:15:27.471 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85181 00:15:27.471 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85181 00:15:27.730 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:27.730 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:27.730 "subsystems": [ 00:15:27.730 { 00:15:27.730 "subsystem": "keyring", 00:15:27.730 "config": [ 00:15:27.730 { 00:15:27.730 "method": "keyring_file_add_key", 00:15:27.730 "params": { 00:15:27.730 "name": "key0", 00:15:27.730 "path": "/tmp/tmp.ZwyV5UGRwu" 00:15:27.730 } 00:15:27.730 } 00:15:27.730 ] 00:15:27.730 }, 00:15:27.730 { 00:15:27.730 "subsystem": "iobuf", 00:15:27.730 "config": [ 00:15:27.730 { 00:15:27.730 "method": "iobuf_set_options", 00:15:27.730 "params": { 00:15:27.730 "small_pool_count": 8192, 00:15:27.730 "large_pool_count": 1024, 00:15:27.730 "small_bufsize": 8192, 00:15:27.730 "large_bufsize": 135168 00:15:27.730 } 00:15:27.730 } 00:15:27.730 ] 00:15:27.730 }, 00:15:27.730 { 00:15:27.730 "subsystem": "sock", 00:15:27.730 "config": [ 00:15:27.730 { 00:15:27.730 "method": "sock_set_default_impl", 00:15:27.730 "params": { 00:15:27.730 "impl_name": "uring" 00:15:27.730 } 00:15:27.730 }, 00:15:27.730 { 00:15:27.730 "method": "sock_impl_set_options", 00:15:27.730 "params": { 00:15:27.730 "impl_name": "ssl", 00:15:27.730 "recv_buf_size": 4096, 00:15:27.730 "send_buf_size": 4096, 00:15:27.730 "enable_recv_pipe": true, 00:15:27.730 "enable_quickack": false, 00:15:27.730 "enable_placement_id": 0, 
00:15:27.730 "enable_zerocopy_send_server": true, 00:15:27.730 "enable_zerocopy_send_client": false, 00:15:27.730 "zerocopy_threshold": 0, 00:15:27.730 "tls_version": 0, 00:15:27.730 "enable_ktls": false 00:15:27.730 } 00:15:27.730 }, 00:15:27.730 { 00:15:27.730 "method": "sock_impl_set_options", 00:15:27.730 "params": { 00:15:27.730 "impl_name": "posix", 00:15:27.730 "recv_buf_size": 2097152, 00:15:27.730 "send_buf_size": 2097152, 00:15:27.730 "enable_recv_pipe": true, 00:15:27.730 "enable_quickack": false, 00:15:27.730 "enable_placement_id": 0, 00:15:27.730 "enable_zerocopy_send_server": true, 00:15:27.730 "enable_zerocopy_send_client": false, 00:15:27.730 "zerocopy_threshold": 0, 00:15:27.730 "tls_version": 0, 00:15:27.730 "enable_ktls": false 00:15:27.730 } 00:15:27.730 }, 00:15:27.730 { 00:15:27.730 "method": "sock_impl_set_options", 00:15:27.730 "params": { 00:15:27.730 "impl_name": "uring", 00:15:27.730 "recv_buf_size": 2097152, 00:15:27.730 "send_buf_size": 2097152, 00:15:27.731 "enable_recv_pipe": true, 00:15:27.731 "enable_quickack": false, 00:15:27.731 "enable_placement_id": 0, 00:15:27.731 "enable_zerocopy_send_server": false, 00:15:27.731 "enable_zerocopy_send_client": false, 00:15:27.731 "zerocopy_threshold": 0, 00:15:27.731 "tls_version": 0, 00:15:27.731 "enable_ktls": false 00:15:27.731 } 00:15:27.731 } 00:15:27.731 ] 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "subsystem": "vmd", 00:15:27.731 "config": [] 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "subsystem": "accel", 00:15:27.731 "config": [ 00:15:27.731 { 00:15:27.731 "method": "accel_set_options", 00:15:27.731 "params": { 00:15:27.731 "small_cache_size": 128, 00:15:27.731 "large_cache_size": 16, 00:15:27.731 "task_count": 2048, 00:15:27.731 "sequence_count": 2048, 00:15:27.731 "buf_count": 2048 00:15:27.731 } 00:15:27.731 } 00:15:27.731 ] 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "subsystem": "bdev", 00:15:27.731 "config": [ 00:15:27.731 { 00:15:27.731 "method": "bdev_set_options", 00:15:27.731 "params": { 00:15:27.731 "bdev_io_pool_size": 65535, 00:15:27.731 "bdev_io_cache_size": 256, 00:15:27.731 "bdev_auto_examine": true, 00:15:27.731 "iobuf_small_cache_size": 128, 00:15:27.731 "iobuf_large_cache_size": 16 00:15:27.731 } 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "method": "bdev_raid_set_options", 00:15:27.731 "params": { 00:15:27.731 "process_window_size_kb": 1024, 00:15:27.731 "process_max_bandwidth_mb_sec": 0 00:15:27.731 } 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "method": "bdev_iscsi_set_options", 00:15:27.731 "params": { 00:15:27.731 "timeout_sec": 30 00:15:27.731 } 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "method": "bdev_nvme_set_options", 00:15:27.731 "params": { 00:15:27.731 "action_on_timeout": "none", 00:15:27.731 "timeout_us": 0, 00:15:27.731 "timeout_admin_us": 0, 00:15:27.731 "keep_alive_timeout_ms": 10000, 00:15:27.731 "arbitration_burst": 0, 00:15:27.731 "low_priority_weight": 0, 00:15:27.731 "medium_priority_weight": 0, 00:15:27.731 "high_priority_weight": 0, 00:15:27.731 "nvme_adminq_poll_period_us": 10000, 00:15:27.731 "nvme_ioq_poll_period_us": 0, 00:15:27.731 "io_queue_requests": 0, 00:15:27.731 "delay_cmd_submit": true, 00:15:27.731 "transport_retry_count": 4, 00:15:27.731 "bdev_retry_count": 3, 00:15:27.731 "transport_ack_timeout": 0, 00:15:27.731 "ctrlr_loss_timeout_sec": 0, 00:15:27.731 "reconnect_delay_sec": 0, 00:15:27.731 "fast_io_fail_timeout_sec": 0, 00:15:27.731 "disable_auto_failback": false, 00:15:27.731 "generate_uuids": false, 00:15:27.731 "transport_tos": 0, 00:15:27.731 
"nvme_error_stat": false, 00:15:27.731 "rdma_srq_size": 0, 00:15:27.731 "io_path_stat": false, 00:15:27.731 "allow_accel_sequence": false, 00:15:27.731 "rdma_max_cq_size": 0, 00:15:27.731 "rdma_cm_event_timeout_ms": 0, 00:15:27.731 "dhchap_digests": [ 00:15:27.731 "sha256", 00:15:27.731 "sha384", 00:15:27.731 "sha512" 00:15:27.731 ], 00:15:27.731 "dhchap_dhgroups": [ 00:15:27.731 "null", 00:15:27.731 "ffdhe2048", 00:15:27.731 "ffdhe3072", 00:15:27.731 "ffdhe4096", 00:15:27.731 "ffdhe6144", 00:15:27.731 "ffdhe8192" 00:15:27.731 ] 00:15:27.731 } 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "method": "bdev_nvme_set_hotplug", 00:15:27.731 "params": { 00:15:27.731 "period_us": 100000, 00:15:27.731 "enable": false 00:15:27.731 } 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "method": "bdev_malloc_create", 00:15:27.731 "params": { 00:15:27.731 "name": "malloc0", 00:15:27.731 "num_blocks": 8192, 00:15:27.731 "block_size": 4096, 00:15:27.731 "physical_block_size": 4096, 00:15:27.731 "uuid": "c1c49b36-1b22-4205-88d8-87824c7d6dc7", 00:15:27.731 "optimal_io_boundary": 0, 00:15:27.731 "md_size": 0, 00:15:27.731 "dif_type": 0, 00:15:27.731 "dif_is_head_of_md": false, 00:15:27.731 "dif_pi_format": 0 00:15:27.731 } 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "method": "bdev_wait_for_examine" 00:15:27.731 } 00:15:27.731 ] 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "subsystem": "nbd", 00:15:27.731 "config": [] 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "subsystem": "scheduler", 00:15:27.731 "config": [ 00:15:27.731 { 00:15:27.731 "method": "framework_set_scheduler", 00:15:27.731 "params": { 00:15:27.731 "name": "static" 00:15:27.731 } 00:15:27.731 } 00:15:27.731 ] 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "subsystem": "nvmf", 00:15:27.731 "config": [ 00:15:27.731 { 00:15:27.731 "method": "nvmf_set_config", 00:15:27.731 "params": { 00:15:27.731 "discovery_filter": "match_any", 00:15:27.731 "admin_cmd_passthru": { 00:15:27.731 "identify_ctrlr": false 00:15:27.731 }, 00:15:27.731 "dhchap_digests": [ 00:15:27.731 "sha256", 00:15:27.731 "sha384", 00:15:27.731 "sha512" 00:15:27.731 ], 00:15:27.731 "dhchap_dhgroups": [ 00:15:27.731 "null", 00:15:27.731 "ffdhe2048", 00:15:27.731 "ffdhe3072", 00:15:27.731 "ffdhe4096", 00:15:27.731 "ffdhe6144", 00:15:27.731 "ffdhe8192" 00:15:27.731 ] 00:15:27.731 } 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "method": "nvmf_set_max_subsystems", 00:15:27.731 "params": { 00:15:27.731 "max_subsystems": 1024 00:15:27.731 } 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "method": "nvmf_set_crdt", 00:15:27.731 "params": { 00:15:27.731 "crdt1": 0, 00:15:27.731 "crdt2": 0, 00:15:27.731 "crdt3": 0 00:15:27.731 } 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "method": "nvmf_create_transport", 00:15:27.731 "params": { 00:15:27.731 "trtype": "TCP", 00:15:27.731 "max_queue_depth": 128, 00:15:27.731 "max_io_qpairs_per_ctrlr": 127, 00:15:27.731 "in_capsule_data_size": 4096, 00:15:27.731 "max_io_size": 131072, 00:15:27.731 "io_unit_size": 131072, 00:15:27.731 "max_aq_depth": 128, 00:15:27.731 "num_shared_buffers": 511, 00:15:27.731 "buf_cache_size": 4294967295, 00:15:27.731 "dif_insert_or_strip": false, 00:15:27.731 "zcopy": false, 00:15:27.731 "c2h_success": false, 00:15:27.731 "sock_priority": 0, 00:15:27.731 "abort_timeout_sec": 1, 00:15:27.731 "ack_timeout": 0, 00:15:27.731 "data_wr_pool_size": 0 00:15:27.731 } 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "method": "nvmf_create_subsystem", 00:15:27.731 "params": { 00:15:27.731 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.731 "allow_any_host": false, 
00:15:27.731 "serial_number": "00000000000000000000", 00:15:27.731 "model_number": "SPDK bdev Controller", 00:15:27.731 "max_namespaces": 32, 00:15:27.731 "min_cntlid": 1, 00:15:27.731 "max_cntlid": 65519, 00:15:27.731 "ana_reporting": false 00:15:27.731 } 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "method": "nvmf_subsystem_add_host", 00:15:27.731 "params": { 00:15:27.731 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.731 "host": "nqn.2016-06.io.spdk:host1", 00:15:27.731 "psk": "key0" 00:15:27.731 } 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "method": "nvmf_subsystem_add_ns", 00:15:27.731 "params": { 00:15:27.731 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.731 "namespace": { 00:15:27.731 "nsid": 1, 00:15:27.731 "bdev_name": "malloc0", 00:15:27.731 "nguid": "C1C49B361B22420588D887824C7D6DC7", 00:15:27.731 "uuid": "c1c49b36-1b22-4205-88d8-87824c7d6dc7", 00:15:27.731 "no_auto_visible": false 00:15:27.731 } 00:15:27.731 } 00:15:27.731 }, 00:15:27.731 { 00:15:27.731 "method": "nvmf_subsystem_add_listener", 00:15:27.731 "params": { 00:15:27.731 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.731 "listen_address": { 00:15:27.731 "trtype": "TCP", 00:15:27.731 "adrfam": "IPv4", 00:15:27.731 "traddr": "10.0.0.3", 00:15:27.731 "trsvcid": "4420" 00:15:27.731 }, 00:15:27.731 "secure_channel": false, 00:15:27.731 "sock_impl": "ssl" 00:15:27.731 } 00:15:27.731 } 00:15:27.731 ] 00:15:27.731 } 00:15:27.731 ] 00:15:27.731 }' 00:15:27.731 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:27.731 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:27.731 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.731 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=85253 00:15:27.731 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:27.731 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 85253 00:15:27.731 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85253 ']' 00:15:27.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.731 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.731 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:27.731 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.731 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:27.731 12:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.731 [2024-11-19 12:35:32.894824] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:27.731 [2024-11-19 12:35:32.894938] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.990 [2024-11-19 12:35:33.045030] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.990 [2024-11-19 12:35:33.078387] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.990 [2024-11-19 12:35:33.078656] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.990 [2024-11-19 12:35:33.078702] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.990 [2024-11-19 12:35:33.078712] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.990 [2024-11-19 12:35:33.078719] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:27.990 [2024-11-19 12:35:33.078807] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.990 [2024-11-19 12:35:33.219345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:28.249 [2024-11-19 12:35:33.272935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.249 [2024-11-19 12:35:33.311732] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:28.249 [2024-11-19 12:35:33.311928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:28.818 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:28.818 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:28.818 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:28.818 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:28.818 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:28.818 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.818 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=85281 00:15:28.818 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 85281 /var/tmp/bdevperf.sock 00:15:28.818 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 85281 ']' 00:15:28.818 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:28.818 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:28.818 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
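The same replay idea is applied on the initiator side next: bdevperf is relaunched with -c /dev/fd/63 carrying the bperfcfg JSON, so the keyring entry and the bdev_nvme_attach_controller call with psk key0 are applied at startup from the config rather than via live RPCs. A minimal sketch, assuming the config was captured from the previous bdevperf instance:

SPDK=/home/vagrant/spdk_repo/spdk
bperfcfg=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock save_config)
# Relaunch with the captured config; the TLS key and controller are restored at startup.
"$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock \
  -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &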
00:15:28.818 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:28.818 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.818 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:28.818 12:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:28.818 "subsystems": [ 00:15:28.818 { 00:15:28.818 "subsystem": "keyring", 00:15:28.818 "config": [ 00:15:28.818 { 00:15:28.818 "method": "keyring_file_add_key", 00:15:28.818 "params": { 00:15:28.818 "name": "key0", 00:15:28.818 "path": "/tmp/tmp.ZwyV5UGRwu" 00:15:28.818 } 00:15:28.819 } 00:15:28.819 ] 00:15:28.819 }, 00:15:28.819 { 00:15:28.819 "subsystem": "iobuf", 00:15:28.819 "config": [ 00:15:28.819 { 00:15:28.819 "method": "iobuf_set_options", 00:15:28.819 "params": { 00:15:28.819 "small_pool_count": 8192, 00:15:28.819 "large_pool_count": 1024, 00:15:28.819 "small_bufsize": 8192, 00:15:28.819 "large_bufsize": 135168 00:15:28.819 } 00:15:28.819 } 00:15:28.819 ] 00:15:28.819 }, 00:15:28.819 { 00:15:28.819 "subsystem": "sock", 00:15:28.819 "config": [ 00:15:28.819 { 00:15:28.819 "method": "sock_set_default_impl", 00:15:28.819 "params": { 00:15:28.819 "impl_name": "uring" 00:15:28.819 } 00:15:28.819 }, 00:15:28.819 { 00:15:28.819 "method": "sock_impl_set_options", 00:15:28.819 "params": { 00:15:28.819 "impl_name": "ssl", 00:15:28.819 "recv_buf_size": 4096, 00:15:28.819 "send_buf_size": 4096, 00:15:28.819 "enable_recv_pipe": true, 00:15:28.819 "enable_quickack": false, 00:15:28.819 "enable_placement_id": 0, 00:15:28.819 "enable_zerocopy_send_server": true, 00:15:28.819 "enable_zerocopy_send_client": false, 00:15:28.819 "zerocopy_threshold": 0, 00:15:28.819 "tls_version": 0, 00:15:28.819 "enable_ktls": false 00:15:28.819 } 00:15:28.819 }, 00:15:28.819 { 00:15:28.819 "method": "sock_impl_set_options", 00:15:28.819 "params": { 00:15:28.819 "impl_name": "posix", 00:15:28.819 "recv_buf_size": 2097152, 00:15:28.819 "send_buf_size": 2097152, 00:15:28.819 "enable_recv_pipe": true, 00:15:28.819 "enable_quickack": false, 00:15:28.819 "enable_placement_id": 0, 00:15:28.819 "enable_zerocopy_send_server": true, 00:15:28.819 "enable_zerocopy_send_client": false, 00:15:28.819 "zerocopy_threshold": 0, 00:15:28.819 "tls_version": 0, 00:15:28.819 "enable_ktls": false 00:15:28.819 } 00:15:28.819 }, 00:15:28.819 { 00:15:28.819 "method": "sock_impl_set_options", 00:15:28.819 "params": { 00:15:28.819 "impl_name": "uring", 00:15:28.819 "recv_buf_size": 2097152, 00:15:28.819 "send_buf_size": 2097152, 00:15:28.819 "enable_recv_pipe": true, 00:15:28.819 "enable_quickack": false, 00:15:28.819 "enable_placement_id": 0, 00:15:28.819 "enable_zerocopy_send_server": false, 00:15:28.819 "enable_zerocopy_send_client": false, 00:15:28.819 "zerocopy_threshold": 0, 00:15:28.819 "tls_version": 0, 00:15:28.819 "enable_ktls": false 00:15:28.819 } 00:15:28.819 } 00:15:28.819 ] 00:15:28.819 }, 00:15:28.819 { 00:15:28.819 "subsystem": "vmd", 00:15:28.819 "config": [] 00:15:28.819 }, 00:15:28.819 { 00:15:28.819 "subsystem": "accel", 00:15:28.819 "config": [ 00:15:28.819 { 00:15:28.819 "method": "accel_set_options", 00:15:28.819 "params": { 00:15:28.819 "small_cache_size": 128, 00:15:28.819 "large_cache_size": 16, 00:15:28.819 "task_count": 2048, 00:15:28.819 "sequence_count": 2048, 00:15:28.819 "buf_count": 2048 
00:15:28.819 } 00:15:28.819 } 00:15:28.819 ] 00:15:28.819 }, 00:15:28.819 { 00:15:28.819 "subsystem": "bdev", 00:15:28.819 "config": [ 00:15:28.819 { 00:15:28.819 "method": "bdev_set_options", 00:15:28.819 "params": { 00:15:28.819 "bdev_io_pool_size": 65535, 00:15:28.819 "bdev_io_cache_size": 256, 00:15:28.819 "bdev_auto_examine": true, 00:15:28.819 "iobuf_small_cache_size": 128, 00:15:28.819 "iobuf_large_cache_size": 16 00:15:28.819 } 00:15:28.819 }, 00:15:28.819 { 00:15:28.819 "method": "bdev_raid_set_options", 00:15:28.819 "params": { 00:15:28.819 "process_window_size_kb": 1024, 00:15:28.819 "process_max_bandwidth_mb_sec": 0 00:15:28.819 } 00:15:28.819 }, 00:15:28.819 { 00:15:28.819 "method": "bdev_iscsi_set_options", 00:15:28.819 "params": { 00:15:28.819 "timeout_sec": 30 00:15:28.819 } 00:15:28.819 }, 00:15:28.819 { 00:15:28.819 "method": "bdev_nvme_set_options", 00:15:28.819 "params": { 00:15:28.819 "action_on_timeout": "none", 00:15:28.819 "timeout_us": 0, 00:15:28.819 "timeout_admin_us": 0, 00:15:28.819 "keep_alive_timeout_ms": 10000, 00:15:28.819 "arbitration_burst": 0, 00:15:28.819 "low_priority_weight": 0, 00:15:28.819 "medium_priority_weight": 0, 00:15:28.819 "high_priority_weight": 0, 00:15:28.819 "nvme_adminq_poll_period_us": 10000, 00:15:28.819 "nvme_ioq_poll_period_us": 0, 00:15:28.819 "io_queue_requests": 512, 00:15:28.819 "delay_cmd_submit": true, 00:15:28.819 "transport_retry_count": 4, 00:15:28.819 "bdev_retry_count": 3, 00:15:28.819 "transport_ack_timeout": 0, 00:15:28.819 "ctrlr_loss_timeout_sec": 0, 00:15:28.819 "reconnect_delay_sec": 0, 00:15:28.819 "fast_io_fail_timeout_sec": 0, 00:15:28.819 "disable_auto_failback": false, 00:15:28.819 "generate_uuids": false, 00:15:28.819 "transport_tos": 0, 00:15:28.819 "nvme_error_stat": false, 00:15:28.819 "rdma_srq_size": 0, 00:15:28.819 "io_path_stat": false, 00:15:28.819 "allow_accel_sequence": false, 00:15:28.819 "rdma_max_cq_size": 0, 00:15:28.819 "rdma_cm_event_timeout_ms": 0, 00:15:28.819 "dhchap_digests": [ 00:15:28.819 "sha256", 00:15:28.819 "sha384", 00:15:28.819 "sha512" 00:15:28.819 ], 00:15:28.819 "dhchap_dhgroups": [ 00:15:28.819 "null", 00:15:28.819 "ffdhe2048", 00:15:28.819 "ffdhe3072", 00:15:28.819 "ffdhe4096", 00:15:28.819 "ffdhe6144", 00:15:28.819 "ffdhe8192" 00:15:28.819 ] 00:15:28.819 } 00:15:28.819 }, 00:15:28.819 { 00:15:28.819 "method": "bdev_nvme_attach_controller", 00:15:28.819 "params": { 00:15:28.819 "name": "nvme0", 00:15:28.819 "trtype": "TCP", 00:15:28.819 "adrfam": "IPv4", 00:15:28.819 "traddr": "10.0.0.3", 00:15:28.819 "trsvcid": "4420", 00:15:28.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.819 "prchk_reftag": false, 00:15:28.819 "prchk_guard": false, 00:15:28.819 "ctrlr_loss_timeout_sec": 0, 00:15:28.819 "reconnect_delay_sec": 0, 00:15:28.819 "fast_io_fail_timeout_sec": 0, 00:15:28.819 "psk": "key0", 00:15:28.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:28.819 "hdgst": false, 00:15:28.819 "ddgst": false 00:15:28.819 } 00:15:28.819 }, 00:15:28.819 { 00:15:28.819 "method": "bdev_nvme_set_hotplug", 00:15:28.819 "params": { 00:15:28.819 "period_us": 100000, 00:15:28.819 "enable": false 00:15:28.819 } 00:15:28.819 }, 00:15:28.819 { 00:15:28.819 "method": "bdev_enable_histogram", 00:15:28.819 "params": { 00:15:28.819 "name": "nvme0n1", 00:15:28.819 "enable": true 00:15:28.819 } 00:15:28.819 }, 00:15:28.819 { 00:15:28.819 "method": "bdev_wait_for_examine" 00:15:28.819 } 00:15:28.819 ] 00:15:28.819 }, 00:15:28.819 { 00:15:28.819 "subsystem": "nbd", 00:15:28.819 "config": [] 00:15:28.819 } 
00:15:28.819 ] 00:15:28.819 }' 00:15:28.819 [2024-11-19 12:35:33.922230] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:28.819 [2024-11-19 12:35:33.922479] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85281 ] 00:15:28.819 [2024-11-19 12:35:34.055486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.078 [2024-11-19 12:35:34.092053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.079 [2024-11-19 12:35:34.201220] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:29.079 [2024-11-19 12:35:34.229858] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:29.646 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:29.646 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:29.646 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:29.646 12:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:29.910 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.910 12:35:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:30.171 Running I/O for 1 seconds... 
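The two commands traced just above form the check-then-run pair for this pass: jq pulls the controller name out of bdev_nvme_get_controllers to confirm nvme0 exists, then bdevperf.py triggers the configured verify workload over the same RPC socket. A stand-alone sketch with the same calls (the exit-on-mismatch test is an illustrative addition):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock
name=$("$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == nvme0 ]] || exit 1
# Kick off the verify run defined on bdevperf's command line (-q 128 -o 4k -w verify -t 1).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests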
00:15:31.107 4259.00 IOPS, 16.64 MiB/s 00:15:31.107 Latency(us) 00:15:31.107 [2024-11-19T12:35:36.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.107 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:31.107 Verification LBA range: start 0x0 length 0x2000 00:15:31.107 nvme0n1 : 1.02 4302.50 16.81 0.00 0.00 29375.14 3470.43 19422.49 00:15:31.107 [2024-11-19T12:35:36.367Z] =================================================================================================================== 00:15:31.107 [2024-11-19T12:35:36.367Z] Total : 4302.50 16.81 0.00 0.00 29375.14 3470.43 19422.49 00:15:31.107 { 00:15:31.107 "results": [ 00:15:31.107 { 00:15:31.107 "job": "nvme0n1", 00:15:31.107 "core_mask": "0x2", 00:15:31.107 "workload": "verify", 00:15:31.107 "status": "finished", 00:15:31.107 "verify_range": { 00:15:31.107 "start": 0, 00:15:31.107 "length": 8192 00:15:31.107 }, 00:15:31.107 "queue_depth": 128, 00:15:31.107 "io_size": 4096, 00:15:31.107 "runtime": 1.019639, 00:15:31.107 "iops": 4302.5031408174855, 00:15:31.108 "mibps": 16.806652893818303, 00:15:31.108 "io_failed": 0, 00:15:31.108 "io_timeout": 0, 00:15:31.108 "avg_latency_us": 29375.143412976355, 00:15:31.108 "min_latency_us": 3470.429090909091, 00:15:31.108 "max_latency_us": 19422.487272727274 00:15:31.108 } 00:15:31.108 ], 00:15:31.108 "core_count": 1 00:15:31.108 } 00:15:31.108 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:31.108 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:31.108 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:31.108 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:15:31.108 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:15:31.108 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:31.108 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:31.108 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:31.108 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:31.108 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:31.108 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:31.108 nvmf_trace.0 00:15:31.367 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:15:31.367 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 85281 00:15:31.367 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85281 ']' 00:15:31.367 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85281 00:15:31.367 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:31.367 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:31.367 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85281 00:15:31.367 killing process 
with pid 85281 00:15:31.367 Received shutdown signal, test time was about 1.000000 seconds 00:15:31.367 00:15:31.367 Latency(us) 00:15:31.367 [2024-11-19T12:35:36.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.367 [2024-11-19T12:35:36.627Z] =================================================================================================================== 00:15:31.367 [2024-11-19T12:35:36.627Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:31.367 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:31.367 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:31.367 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85281' 00:15:31.367 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85281 00:15:31.367 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85281 00:15:31.367 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:31.367 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:31.367 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:31.627 rmmod nvme_tcp 00:15:31.627 rmmod nvme_fabrics 00:15:31.627 rmmod nvme_keyring 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 85253 ']' 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 85253 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 85253 ']' 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 85253 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85253 00:15:31.627 killing process with pid 85253 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85253' 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 85253 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 85253 
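The teardown above repeats one pattern for pids 85281 and 85253: confirm the pid is still alive with kill -0, read its command name with ps, refuse to touch sudo, then kill and wait. A condensed sketch of that killprocess idiom (simplified for illustration; the real helper in autotest_common.sh differs in details):

killprocess() {
  local pid=$1
  kill -0 "$pid" || return 1                              # must still be running
  [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"
}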
00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:31.627 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:31.886 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:31.886 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:31.886 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.886 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:31.886 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:31.886 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:31.886 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:31.886 12:35:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:31.886 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:31.886 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:31.886 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.886 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.886 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:31.886 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.886 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.886 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.886 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:31.886 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.sXoIbfb440 /tmp/tmp.chEALPnrGN /tmp/tmp.ZwyV5UGRwu 00:15:31.886 ************************************ 00:15:31.886 END TEST nvmf_tls 00:15:31.886 ************************************ 00:15:31.886 00:15:31.886 real 1m21.707s 00:15:31.886 user 2m13.409s 00:15:31.886 sys 0m25.993s 00:15:31.886 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:15:31.886 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:32.147 ************************************ 00:15:32.147 START TEST nvmf_fips 00:15:32.147 ************************************ 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:32.147 * Looking for test storage... 00:15:32.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:32.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.147 --rc genhtml_branch_coverage=1 00:15:32.147 --rc genhtml_function_coverage=1 00:15:32.147 --rc genhtml_legend=1 00:15:32.147 --rc geninfo_all_blocks=1 00:15:32.147 --rc geninfo_unexecuted_blocks=1 00:15:32.147 00:15:32.147 ' 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:32.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.147 --rc genhtml_branch_coverage=1 00:15:32.147 --rc genhtml_function_coverage=1 00:15:32.147 --rc genhtml_legend=1 00:15:32.147 --rc geninfo_all_blocks=1 00:15:32.147 --rc geninfo_unexecuted_blocks=1 00:15:32.147 00:15:32.147 ' 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:32.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.147 --rc genhtml_branch_coverage=1 00:15:32.147 --rc genhtml_function_coverage=1 00:15:32.147 --rc genhtml_legend=1 00:15:32.147 --rc geninfo_all_blocks=1 00:15:32.147 --rc geninfo_unexecuted_blocks=1 00:15:32.147 00:15:32.147 ' 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:32.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.147 --rc genhtml_branch_coverage=1 00:15:32.147 --rc genhtml_function_coverage=1 00:15:32.147 --rc genhtml_legend=1 00:15:32.147 --rc geninfo_all_blocks=1 00:15:32.147 --rc geninfo_unexecuted_blocks=1 00:15:32.147 00:15:32.147 ' 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
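(Aside: the version gates in this trace all go through scripts/common.sh's cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field. A minimal standalone sketch of that logic follows; it is a simplified re-implementation for illustration, not the actual scripts/common.sh, and it assumes purely numeric fields.)

version_ge() {                       # returns 0 when $1 >= $2
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x > y )) && return 0      # strictly newer in this field
        (( x < y )) && return 1      # strictly older in this field
    done
    return 0                         # all fields equal
}
# version_ge 3.1.1 3.0.0 -> 0, which is why the 'ge 3.1.1 3.0.0' OpenSSL check later in this trace passes.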
00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.147 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:32.148 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.148 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:15:32.408 Error setting digest 00:15:32.408 4062EE87047F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:32.408 4062EE87047F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:32.408 
12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:32.408 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:32.409 Cannot find device "nvmf_init_br" 00:15:32.409 12:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:32.409 Cannot find device "nvmf_init_br2" 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:32.409 Cannot find device "nvmf_tgt_br" 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.409 Cannot find device "nvmf_tgt_br2" 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:32.409 Cannot find device "nvmf_init_br" 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:32.409 Cannot find device "nvmf_init_br2" 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:32.409 Cannot find device "nvmf_tgt_br" 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:32.409 Cannot find device "nvmf_tgt_br2" 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:32.409 Cannot find device "nvmf_br" 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:32.409 Cannot find device "nvmf_init_if" 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:32.409 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:32.669 Cannot find device "nvmf_init_if2" 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:32.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:32.669 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:32.669 12:35:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:32.669 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:32.929 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:32.929 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:15:32.929 00:15:32.929 --- 10.0.0.3 ping statistics --- 00:15:32.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.929 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:32.929 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:32.929 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:15:32.929 00:15:32.929 --- 10.0.0.4 ping statistics --- 00:15:32.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.929 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:32.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:32.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:32.929 00:15:32.929 --- 10.0.0.1 ping statistics --- 00:15:32.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.929 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:32.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:32.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:15:32.929 00:15:32.929 --- 10.0.0.2 ping statistics --- 00:15:32.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.929 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # return 0 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=85595 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 85595 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 85595 ']' 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.929 12:35:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:32.929 [2024-11-19 12:35:38.060791] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
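(Aside: the nvmf_veth_init sequence traced above amounts to two veth pairs joined by a bridge, with the target ends moved into the nvmf_tgt_ns_spdk namespace. A condensed sketch using only commands that appear in the trace; the second initiator/target pair, 10.0.0.2 and 10.0.0.4, is handled the same way and omitted here for brevity.)

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                        # bridge the host-side peers together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3        # initiator reaches the target address through the bridge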
00:15:32.929 [2024-11-19 12:35:38.061111] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.188 [2024-11-19 12:35:38.203723] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.188 [2024-11-19 12:35:38.244004] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.188 [2024-11-19 12:35:38.244069] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.188 [2024-11-19 12:35:38.244085] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.188 [2024-11-19 12:35:38.244095] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.188 [2024-11-19 12:35:38.244104] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.188 [2024-11-19 12:35:38.244136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.188 [2024-11-19 12:35:38.277548] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.188 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.188 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:33.188 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:33.188 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:33.188 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:33.188 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.188 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:33.188 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:33.188 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:33.188 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.OQ4 00:15:33.188 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:33.189 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.OQ4 00:15:33.189 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.OQ4 00:15:33.189 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.OQ4 00:15:33.189 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:33.448 [2024-11-19 12:35:38.667778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.448 [2024-11-19 12:35:38.683738] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:33.448 [2024-11-19 12:35:38.683927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:33.745 malloc0 00:15:33.745 12:35:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:33.745 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=85629 00:15:33.745 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:33.745 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 85629 /var/tmp/bdevperf.sock 00:15:33.745 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 85629 ']' 00:15:33.745 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:33.745 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:33.745 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:33.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:33.745 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:33.745 12:35:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:33.745 [2024-11-19 12:35:38.841346] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:33.745 [2024-11-19 12:35:38.841438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85629 ] 00:15:33.745 [2024-11-19 12:35:38.981313] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.055 [2024-11-19 12:35:39.017622] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.055 [2024-11-19 12:35:39.046574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:34.055 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:34.055 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:34.055 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.OQ4 00:15:34.313 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:34.572 [2024-11-19 12:35:39.614334] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:34.572 TLSTESTn1 00:15:34.572 12:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:34.572 Running I/O for 10 seconds... 
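(Aside: the bdevperf side is configured entirely over its RPC socket. Condensed from the rpc.py calls shown above, the TLS wiring is just two calls, with paths and names exactly as in this trace.)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# register the 0600 PSK file written by fips.sh as key "key0" in bdevperf's keyring
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.OQ4
# attach an NVMe/TCP controller to the listener on 10.0.0.3:4420, using that PSK for TLS
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
# the attached namespace appears as bdev TLSTESTn1, which bdevperf.py perform_tests
# then drives with verify I/O for 10 seconds (results below)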
00:15:36.885 4063.00 IOPS, 15.87 MiB/s [2024-11-19T12:35:43.080Z] 4018.00 IOPS, 15.70 MiB/s [2024-11-19T12:35:44.015Z] 3982.33 IOPS, 15.56 MiB/s [2024-11-19T12:35:44.950Z] 3977.00 IOPS, 15.54 MiB/s [2024-11-19T12:35:45.886Z] 3982.60 IOPS, 15.56 MiB/s [2024-11-19T12:35:46.822Z] 3986.00 IOPS, 15.57 MiB/s [2024-11-19T12:35:48.197Z] 3993.00 IOPS, 15.60 MiB/s [2024-11-19T12:35:49.131Z] 3992.38 IOPS, 15.60 MiB/s [2024-11-19T12:35:50.068Z] 3992.67 IOPS, 15.60 MiB/s [2024-11-19T12:35:50.068Z] 3991.20 IOPS, 15.59 MiB/s 00:15:44.808 Latency(us) 00:15:44.808 [2024-11-19T12:35:50.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.808 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:44.808 Verification LBA range: start 0x0 length 0x2000 00:15:44.808 TLSTESTn1 : 10.02 3997.28 15.61 0.00 0.00 31963.64 5838.66 23116.33 00:15:44.808 [2024-11-19T12:35:50.068Z] =================================================================================================================== 00:15:44.808 [2024-11-19T12:35:50.068Z] Total : 3997.28 15.61 0.00 0.00 31963.64 5838.66 23116.33 00:15:44.808 { 00:15:44.808 "results": [ 00:15:44.808 { 00:15:44.808 "job": "TLSTESTn1", 00:15:44.808 "core_mask": "0x4", 00:15:44.808 "workload": "verify", 00:15:44.808 "status": "finished", 00:15:44.808 "verify_range": { 00:15:44.808 "start": 0, 00:15:44.808 "length": 8192 00:15:44.808 }, 00:15:44.808 "queue_depth": 128, 00:15:44.808 "io_size": 4096, 00:15:44.808 "runtime": 10.016565, 00:15:44.808 "iops": 3997.2785081512475, 00:15:44.808 "mibps": 15.61436917246581, 00:15:44.808 "io_failed": 0, 00:15:44.808 "io_timeout": 0, 00:15:44.808 "avg_latency_us": 31963.63874531423, 00:15:44.808 "min_latency_us": 5838.6618181818185, 00:15:44.808 "max_latency_us": 23116.334545454545 00:15:44.808 } 00:15:44.808 ], 00:15:44.808 "core_count": 1 00:15:44.808 } 00:15:44.808 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:44.809 nvmf_trace.0 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85629 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 85629 ']' 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 
85629 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85629 00:15:44.809 killing process with pid 85629 00:15:44.809 Received shutdown signal, test time was about 10.000000 seconds 00:15:44.809 00:15:44.809 Latency(us) 00:15:44.809 [2024-11-19T12:35:50.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:44.809 [2024-11-19T12:35:50.069Z] =================================================================================================================== 00:15:44.809 [2024-11-19T12:35:50.069Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85629' 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 85629 00:15:44.809 12:35:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 85629 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:45.068 rmmod nvme_tcp 00:15:45.068 rmmod nvme_fabrics 00:15:45.068 rmmod nvme_keyring 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 85595 ']' 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 85595 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 85595 ']' 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 85595 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85595 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85595' 00:15:45.068 killing process with pid 85595 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 85595 00:15:45.068 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 85595 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:45.327 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:45.586 12:35:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.OQ4 00:15:45.586 ************************************ 00:15:45.586 END TEST nvmf_fips 00:15:45.586 ************************************ 00:15:45.586 00:15:45.586 real 0m13.546s 00:15:45.586 user 0m18.373s 00:15:45.586 sys 0m5.670s 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:45.586 ************************************ 00:15:45.586 START TEST nvmf_control_msg_list 00:15:45.586 ************************************ 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:45.586 * Looking for test storage... 00:15:45.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:45.586 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:45.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.846 --rc genhtml_branch_coverage=1 00:15:45.846 --rc genhtml_function_coverage=1 00:15:45.846 --rc genhtml_legend=1 00:15:45.846 --rc geninfo_all_blocks=1 00:15:45.846 --rc geninfo_unexecuted_blocks=1 00:15:45.846 00:15:45.846 ' 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:45.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.846 --rc genhtml_branch_coverage=1 00:15:45.846 --rc genhtml_function_coverage=1 00:15:45.846 --rc genhtml_legend=1 00:15:45.846 --rc geninfo_all_blocks=1 00:15:45.846 --rc geninfo_unexecuted_blocks=1 00:15:45.846 00:15:45.846 ' 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:45.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.846 --rc genhtml_branch_coverage=1 00:15:45.846 --rc genhtml_function_coverage=1 00:15:45.846 --rc genhtml_legend=1 00:15:45.846 --rc geninfo_all_blocks=1 00:15:45.846 --rc geninfo_unexecuted_blocks=1 00:15:45.846 00:15:45.846 ' 00:15:45.846 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:45.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:45.846 --rc genhtml_branch_coverage=1 00:15:45.846 --rc genhtml_function_coverage=1 00:15:45.847 --rc genhtml_legend=1 00:15:45.847 --rc geninfo_all_blocks=1 00:15:45.847 --rc 
geninfo_unexecuted_blocks=1 00:15:45.847 00:15:45.847 ' 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:45.847 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:45.847 Cannot find device "nvmf_init_br" 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:45.847 Cannot find device "nvmf_init_br2" 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:45.847 Cannot find device "nvmf_tgt_br" 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:45.847 Cannot find device "nvmf_tgt_br2" 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:45.847 12:35:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:45.847 Cannot find device "nvmf_init_br" 00:15:45.847 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:45.847 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:45.847 Cannot find device "nvmf_init_br2" 00:15:45.847 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:45.847 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:45.847 Cannot find device "nvmf_tgt_br" 00:15:45.847 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:45.848 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:45.848 Cannot find device "nvmf_tgt_br2" 00:15:45.848 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:45.848 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:45.848 Cannot find device "nvmf_br" 00:15:45.848 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:45.848 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:45.848 Cannot find 
device "nvmf_init_if" 00:15:45.848 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:45.848 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:45.848 Cannot find device "nvmf_init_if2" 00:15:45.848 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:45.848 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:45.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.848 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:45.848 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:45.848 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:45.848 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:45.848 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:45.848 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:45.848 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:46.107 12:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:46.107 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:46.107 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:15:46.107 00:15:46.107 --- 10.0.0.3 ping statistics --- 00:15:46.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.107 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:46.107 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:46.107 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:15:46.107 00:15:46.107 --- 10.0.0.4 ping statistics --- 00:15:46.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.107 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:46.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:46.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:46.107 00:15:46.107 --- 10.0.0.1 ping statistics --- 00:15:46.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.107 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:46.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:15:46.107 00:15:46.107 --- 10.0.0.2 ping statistics --- 00:15:46.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.107 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # return 0 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:46.107 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.108 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:46.108 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:46.367 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:46.367 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:46.367 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:46.367 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:46.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.367 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=86002 00:15:46.367 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:46.367 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 86002 00:15:46.367 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 86002 ']' 00:15:46.367 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.367 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:46.367 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
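For orientation, the nvmf_veth_init sequence traced above builds a small virtual topology (one network namespace for the target-side interfaces, a bridge tying the veth peer ends together) and verifies it with pings before the target application is started inside the namespace. The following is a condensed sketch reconstructed from the trace records, not the actual test/nvmf/common.sh; ordering is slightly compressed:

# target-side namespace plus four veth pairs; the *_if ends carry traffic,
# the *_br ends get enslaved to a bridge in the default namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# initiator side gets 10.0.0.1/.2, target side (in the namespace) 10.0.0.3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" up
    ip link set "$l" master nvmf_br
done

# allow NVMe/TCP (port 4420) in and bridge forwarding, tagged SPDK_NVMF so the
# teardown later in this log can strip exactly these rules again
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4              # initiator -> target side
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator side

# the target is then launched inside the namespace (pid 86002 in this run) and
# the test waits for its RPC socket /var/tmp/spdk.sock to appear
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &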
00:15:46.367 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:46.367 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:46.367 [2024-11-19 12:35:51.444170] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:46.367 [2024-11-19 12:35:51.444453] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.367 [2024-11-19 12:35:51.582207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.367 [2024-11-19 12:35:51.619012] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.367 [2024-11-19 12:35:51.619454] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.367 [2024-11-19 12:35:51.619596] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.367 [2024-11-19 12:35:51.619610] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.367 [2024-11-19 12:35:51.619617] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.367 [2024-11-19 12:35:51.619649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.628 [2024-11-19 12:35:51.651336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:46.628 [2024-11-19 12:35:51.748230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:46.628 Malloc0 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:46.628 [2024-11-19 12:35:51.792381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=86027 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=86028 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=86029 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:46.628 12:35:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 86027 00:15:46.888 [2024-11-19 12:35:51.971102] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:46.888 [2024-11-19 12:35:51.971649] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:46.888 [2024-11-19 12:35:51.972120] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:47.823 Initializing NVMe Controllers 00:15:47.823 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:47.823 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:47.823 Initialization complete. Launching workers. 00:15:47.823 ======================================================== 00:15:47.823 Latency(us) 00:15:47.823 Device Information : IOPS MiB/s Average min max 00:15:47.823 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3474.00 13.57 287.49 169.11 844.12 00:15:47.823 ======================================================== 00:15:47.823 Total : 3474.00 13.57 287.49 169.11 844.12 00:15:47.823 00:15:47.823 Initializing NVMe Controllers 00:15:47.823 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:47.823 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:47.823 Initialization complete. Launching workers. 00:15:47.823 ======================================================== 00:15:47.823 Latency(us) 00:15:47.823 Device Information : IOPS MiB/s Average min max 00:15:47.823 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3477.00 13.58 287.18 193.69 781.17 00:15:47.823 ======================================================== 00:15:47.823 Total : 3477.00 13.58 287.18 193.69 781.17 00:15:47.823 00:15:47.824 Initializing NVMe Controllers 00:15:47.824 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:47.824 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:47.824 Initialization complete. Launching workers. 
00:15:47.824 ======================================================== 00:15:47.824 Latency(us) 00:15:47.824 Device Information : IOPS MiB/s Average min max 00:15:47.824 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3462.00 13.52 288.43 200.59 781.14 00:15:47.824 ======================================================== 00:15:47.824 Total : 3462.00 13.52 288.43 200.59 781.14 00:15:47.824 00:15:47.824 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 86028 00:15:47.824 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 86029 00:15:47.824 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:47.824 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:47.824 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:47.824 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:47.824 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:47.824 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:47.824 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:47.824 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:47.824 rmmod nvme_tcp 00:15:47.824 rmmod nvme_fabrics 00:15:48.082 rmmod nvme_keyring 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 86002 ']' 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 86002 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 86002 ']' 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 86002 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86002 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:48.082 killing process with pid 86002 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86002' 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 86002 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 86002 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:48.082 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:48.083 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:15:48.083 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:15:48.083 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:48.083 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:48.083 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:48.083 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:48.342 00:15:48.342 real 0m2.802s 00:15:48.342 user 0m4.615s 00:15:48.342 
sys 0m1.290s 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:48.342 ************************************ 00:15:48.342 END TEST nvmf_control_msg_list 00:15:48.342 ************************************ 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:48.342 12:35:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:48.602 ************************************ 00:15:48.602 START TEST nvmf_wait_for_buf 00:15:48.602 ************************************ 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:48.602 * Looking for test storage... 00:15:48.602 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:48.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.602 --rc genhtml_branch_coverage=1 00:15:48.602 --rc genhtml_function_coverage=1 00:15:48.602 --rc genhtml_legend=1 00:15:48.602 --rc geninfo_all_blocks=1 00:15:48.602 --rc geninfo_unexecuted_blocks=1 00:15:48.602 00:15:48.602 ' 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:48.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.602 --rc genhtml_branch_coverage=1 00:15:48.602 --rc genhtml_function_coverage=1 00:15:48.602 --rc genhtml_legend=1 00:15:48.602 --rc geninfo_all_blocks=1 00:15:48.602 --rc geninfo_unexecuted_blocks=1 00:15:48.602 00:15:48.602 ' 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:48.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.602 --rc genhtml_branch_coverage=1 00:15:48.602 --rc genhtml_function_coverage=1 00:15:48.602 --rc genhtml_legend=1 00:15:48.602 --rc geninfo_all_blocks=1 00:15:48.602 --rc geninfo_unexecuted_blocks=1 00:15:48.602 00:15:48.602 ' 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:48.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.602 --rc genhtml_branch_coverage=1 00:15:48.602 --rc genhtml_function_coverage=1 00:15:48.602 --rc genhtml_legend=1 00:15:48.602 --rc geninfo_all_blocks=1 00:15:48.602 --rc geninfo_unexecuted_blocks=1 00:15:48.602 00:15:48.602 ' 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.602 12:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.602 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:48.603 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 
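For reference, the nvmf_control_msg_list run that finished above provisioned and exercised its target entirely through JSON-RPC and spdk_nvme_perf; the wait_for_buf test starting here repeats the same nvmftestinit before its own workload. A condensed sketch of the equivalent manual steps for the earlier run, assuming SPDK's standard scripts/rpc.py client against the default /var/tmp/spdk.sock socket (the test itself goes through the rpc_cmd helper):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed client path

# TCP transport with a deliberately tiny control-message pool (the point of the test)
$RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1

# subsystem with one 32 MB / 512 B-block malloc namespace, listening on 10.0.0.3:4420
$RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
$RPC bdev_malloc_create -b Malloc0 32 512
$RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

# three perf instances on cores 1-3 contend for the shared control-message buffers
for mask in 0x2 0x4 0x8; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 \
        -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
done
wait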
00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:48.603 Cannot find device "nvmf_init_br" 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:48.603 Cannot find device "nvmf_init_br2" 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:48.603 Cannot find device "nvmf_tgt_br" 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.603 Cannot find device "nvmf_tgt_br2" 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:48.603 Cannot find device "nvmf_init_br" 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:48.603 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:48.862 Cannot find device "nvmf_init_br2" 00:15:48.862 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:48.862 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:48.862 Cannot find device "nvmf_tgt_br" 00:15:48.862 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:48.862 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:48.862 Cannot find device "nvmf_tgt_br2" 00:15:48.862 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:48.862 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:48.862 Cannot find device "nvmf_br" 00:15:48.862 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:48.862 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:48.863 Cannot find device "nvmf_init_if" 00:15:48.863 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:48.863 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:48.863 Cannot find device "nvmf_init_if2" 00:15:48.863 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:48.863 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.863 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:48.863 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.863 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.863 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:48.863 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:48.863 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:48.863 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:48.863 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:48.863 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:48.863 12:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:48.863 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:48.863 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:48.863 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:48.863 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:48.863 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:48.863 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:48.863 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:48.863 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:48.863 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:48.863 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:48.863 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:48.863 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:48.863 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:48.863 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:49.123 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:49.123 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:15:49.123 00:15:49.123 --- 10.0.0.3 ping statistics --- 00:15:49.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.123 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:49.123 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:49.123 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:15:49.123 00:15:49.123 --- 10.0.0.4 ping statistics --- 00:15:49.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.123 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:49.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:49.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:49.123 00:15:49.123 --- 10.0.0.1 ping statistics --- 00:15:49.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.123 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:49.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:49.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:15:49.123 00:15:49.123 --- 10.0.0.2 ping statistics --- 00:15:49.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.123 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # return 0 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:49.123 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=86265 00:15:49.124 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:49.124 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 86265 00:15:49.124 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 86265 ']' 00:15:49.124 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.124 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:49.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.124 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.124 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:49.124 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:49.124 [2024-11-19 12:35:54.309420] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:49.124 [2024-11-19 12:35:54.309531] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.383 [2024-11-19 12:35:54.452365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.383 [2024-11-19 12:35:54.490254] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.383 [2024-11-19 12:35:54.490324] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.383 [2024-11-19 12:35:54.490351] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.383 [2024-11-19 12:35:54.490359] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.383 [2024-11-19 12:35:54.490366] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.383 [2024-11-19 12:35:54.490392] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.383 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.383 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:15:49.383 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:49.383 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:49.383 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.643 12:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:49.643 [2024-11-19 12:35:54.704131] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:49.643 Malloc0 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:49.643 [2024-11-19 12:35:54.746402] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:49.643 [2024-11-19 12:35:54.770491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.643 12:35:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:49.902 [2024-11-19 12:35:54.959853] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:51.305 Initializing NVMe Controllers 00:15:51.306 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:51.306 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:51.306 Initialization complete. Launching workers. 00:15:51.306 ======================================================== 00:15:51.306 Latency(us) 00:15:51.306 Device Information : IOPS MiB/s Average min max 00:15:51.306 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.98 62.50 8000.28 6053.85 15023.73 00:15:51.306 ======================================================== 00:15:51.306 Total : 499.98 62.50 8000.28 6053.85 15023.73 00:15:51.306 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:51.306 rmmod nvme_tcp 00:15:51.306 rmmod nvme_fabrics 00:15:51.306 rmmod nvme_keyring 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 86265 ']' 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 86265 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 86265 ']' 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- 
# kill -0 86265 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86265 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:51.306 killing process with pid 86265 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86265' 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 86265 00:15:51.306 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 86265 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.565 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.824 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:51.824 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.824 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.824 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.824 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:51.824 00:15:51.824 real 0m3.257s 00:15:51.824 user 0m2.638s 00:15:51.824 sys 0m0.772s 00:15:51.824 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:51.824 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:51.824 ************************************ 00:15:51.824 END TEST nvmf_wait_for_buf 00:15:51.824 ************************************ 00:15:51.824 12:35:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:15:51.824 12:35:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:51.824 12:35:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:51.824 12:35:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:51.824 12:35:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:51.824 ************************************ 00:15:51.824 START TEST nvmf_fuzz 00:15:51.824 ************************************ 00:15:51.824 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:51.824 * Looking for test storage... 
00:15:51.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:51.824 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:51.824 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:15:51.824 12:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:52.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.085 --rc genhtml_branch_coverage=1 00:15:52.085 --rc genhtml_function_coverage=1 00:15:52.085 --rc genhtml_legend=1 00:15:52.085 --rc geninfo_all_blocks=1 00:15:52.085 --rc geninfo_unexecuted_blocks=1 00:15:52.085 00:15:52.085 ' 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:52.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.085 --rc genhtml_branch_coverage=1 00:15:52.085 --rc genhtml_function_coverage=1 00:15:52.085 --rc genhtml_legend=1 00:15:52.085 --rc geninfo_all_blocks=1 00:15:52.085 --rc geninfo_unexecuted_blocks=1 00:15:52.085 00:15:52.085 ' 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:52.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.085 --rc genhtml_branch_coverage=1 00:15:52.085 --rc genhtml_function_coverage=1 00:15:52.085 --rc genhtml_legend=1 00:15:52.085 --rc geninfo_all_blocks=1 00:15:52.085 --rc geninfo_unexecuted_blocks=1 00:15:52.085 00:15:52.085 ' 00:15:52.085 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:52.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.085 --rc genhtml_branch_coverage=1 00:15:52.085 --rc genhtml_function_coverage=1 00:15:52.085 --rc genhtml_legend=1 00:15:52.085 --rc geninfo_all_blocks=1 00:15:52.085 --rc geninfo_unexecuted_blocks=1 00:15:52.085 00:15:52.085 ' 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
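The scripts/common.sh trace above (lt 1.15 2) splits both version strings on dots and compares them field by field, apparently to decide whether the detected lcov predates 2.x so the matching --rc coverage flags can be chosen. A rough standalone sketch of that comparison; ver_lt is an illustrative name, the real helpers are lt() and cmp_versions() in scripts/common.sh and differ in detail:

    # Return 0 (true) if version $1 sorts strictly before version $2.
    ver_lt() {
        local IFS=.- a b i
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0   # earlier field decides
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1                                      # versions are equal
    }
    ver_lt 1.15 2 && echo "lcov is older than 2.x"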
00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:52.086 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
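nvmftestinit below rebuilds the same veth/namespace topology used for the previous test case: a network namespace holds the target-side interfaces, the peer ends of all four veth pairs stay in the root namespace and are enslaved to the nvmf_br bridge, and TCP port 4420 is opened in iptables. Condensed to one interface pair for readability (the trace that follows also creates the *_if2/*_br2 pair and the second target interface):

    ip netns add nvmf_tgt_ns_spdk                                       # target lives in its own netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br           # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br             # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                      # move target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                             # bridge the two peer ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in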
00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:52.086 Cannot find device "nvmf_init_br" 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:15:52.086 12:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:52.086 Cannot find device "nvmf_init_br2" 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:52.086 Cannot find device "nvmf_tgt_br" 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.086 Cannot find device "nvmf_tgt_br2" 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:52.086 Cannot find device "nvmf_init_br" 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:52.086 Cannot find device "nvmf_init_br2" 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:52.086 Cannot find device "nvmf_tgt_br" 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:52.086 Cannot find device "nvmf_tgt_br2" 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:52.086 Cannot find device "nvmf_br" 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:52.086 Cannot find device "nvmf_init_if" 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:52.086 Cannot find device "nvmf_init_if2" 00:15:52.086 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:15:52.087 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.087 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.087 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:15:52.087 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.087 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.087 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:15:52.087 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:52.087 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:52.087 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:15:52.087 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:52.087 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:52.087 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:52.346 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:52.347 12:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:52.347 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:52.347 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:15:52.347 00:15:52.347 --- 10.0.0.3 ping statistics --- 00:15:52.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.347 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:52.347 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:52.347 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:15:52.347 00:15:52.347 --- 10.0.0.4 ping statistics --- 00:15:52.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.347 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:52.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:52.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:52.347 00:15:52.347 --- 10.0.0.1 ping statistics --- 00:15:52.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.347 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:52.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:52.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:15:52.347 00:15:52.347 --- 10.0.0.2 ping statistics --- 00:15:52.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.347 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # return 0 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=86522 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 86522 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 86522 ']' 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:52.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
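waitforlisten then polls the new target (pid 86522) until its JSON-RPC socket answers, printing the "Waiting for process to start up..." message seen above while it retries. A rough sketch of that loop, assuming scripts/rpc.py with rpc_get_methods as the liveness probe; the real helper lives in common/autotest_common.sh and differs in detail:

    # Wait until the SPDK app behind pid $1 is serving RPCs on $2 (default /var/tmp/spdk.sock).
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do                      # max_retries=100, as in the trace
            kill -0 "$pid" 2>/dev/null || return 1           # target exited early
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                                     # RPC server is up and listening
            fi
            sleep 0.5
        done
        return 1                                             # gave up waiting
    }
    wait_for_rpc 86522 /var/tmp/spdk.sock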
00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:52.347 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.915 Malloc0 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:15:52.915 12:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:15:53.174 Shutting down the fuzz application 00:15:53.174 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:15:53.433 Shutting down the fuzz application 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:53.433 rmmod nvme_tcp 00:15:53.433 rmmod nvme_fabrics 00:15:53.433 rmmod nvme_keyring 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 86522 ']' 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 86522 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 86522 ']' 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 86522 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86522 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:53.433 killing process with pid 86522 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86522' 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 86522 00:15:53.433 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 86522 00:15:53.693 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:53.693 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:53.693 12:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:53.693 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:15:53.693 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:53.693 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:15:53.693 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:15:53.693 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:53.693 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:53.693 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:53.693 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:53.693 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:53.693 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.952 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:53.952 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:53.952 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:53.952 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:53.952 12:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:53.952 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:53.952 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:53.952 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:53.952 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:53.952 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:53.952 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.952 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.952 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.952 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:15:53.952 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:15:53.952 00:15:53.952 real 0m2.229s 00:15:53.952 user 0m1.881s 00:15:53.952 sys 0m0.664s 00:15:53.952 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:53.952 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:53.952 ************************************ 00:15:53.952 END TEST nvmf_fuzz 00:15:53.952 ************************************ 00:15:53.953 12:35:59 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:53.953 12:35:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:53.953 12:35:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:53.953 12:35:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:53.953 ************************************ 00:15:53.953 START TEST nvmf_multiconnection 00:15:53.953 ************************************ 00:15:53.953 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:54.213 * Looking for test storage... 00:15:54.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:54.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.213 --rc genhtml_branch_coverage=1 00:15:54.213 --rc genhtml_function_coverage=1 00:15:54.213 --rc genhtml_legend=1 00:15:54.213 --rc geninfo_all_blocks=1 00:15:54.213 --rc geninfo_unexecuted_blocks=1 00:15:54.213 00:15:54.213 ' 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:54.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.213 --rc genhtml_branch_coverage=1 00:15:54.213 --rc genhtml_function_coverage=1 00:15:54.213 --rc genhtml_legend=1 00:15:54.213 --rc geninfo_all_blocks=1 00:15:54.213 --rc geninfo_unexecuted_blocks=1 00:15:54.213 00:15:54.213 ' 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:54.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.213 --rc genhtml_branch_coverage=1 00:15:54.213 --rc genhtml_function_coverage=1 00:15:54.213 --rc genhtml_legend=1 00:15:54.213 --rc geninfo_all_blocks=1 00:15:54.213 --rc geninfo_unexecuted_blocks=1 00:15:54.213 00:15:54.213 ' 00:15:54.213 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:54.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.214 --rc genhtml_branch_coverage=1 00:15:54.214 --rc genhtml_function_coverage=1 00:15:54.214 --rc genhtml_legend=1 00:15:54.214 --rc geninfo_all_blocks=1 00:15:54.214 --rc geninfo_unexecuted_blocks=1 00:15:54.214 00:15:54.214 ' 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.214 
12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:54.214 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:54.214 12:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:54.214 Cannot find device "nvmf_init_br" 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:54.214 Cannot find device "nvmf_init_br2" 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:54.214 Cannot find device "nvmf_tgt_br" 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.214 Cannot find device "nvmf_tgt_br2" 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:54.214 Cannot find device "nvmf_init_br" 00:15:54.214 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:15:54.215 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:54.215 Cannot find device "nvmf_init_br2" 00:15:54.215 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:15:54.215 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:54.215 Cannot find device "nvmf_tgt_br" 00:15:54.215 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:15:54.215 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:54.215 Cannot find device "nvmf_tgt_br2" 00:15:54.215 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:15:54.215 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:54.474 Cannot find device "nvmf_br" 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:54.474 Cannot find device "nvmf_init_if" 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:15:54.474 Cannot find device "nvmf_init_if2" 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:54.474 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:54.733 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:54.733 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:54.733 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:54.734 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:54.734 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:15:54.734 00:15:54.734 --- 10.0.0.3 ping statistics --- 00:15:54.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.734 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:54.734 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:54.734 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:15:54.734 00:15:54.734 --- 10.0.0.4 ping statistics --- 00:15:54.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.734 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:54.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:54.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:54.734 00:15:54.734 --- 10.0.0.1 ping statistics --- 00:15:54.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.734 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:54.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:15:54.734 00:15:54.734 --- 10.0.0.2 ping statistics --- 00:15:54.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.734 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # return 0 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=86756 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 86756 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 86756 ']' 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
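The pings above are the final check of the veth topology that nvmf_veth_init rebuilds before each target is started (here with -m 0xF for four cores instead of the fuzz test's 0x1): two veth pairs per side, the target ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to the nvmf_br bridge, and iptables ACCEPT rules tagged with an SPDK_NVMF comment so iptr can strip them again at teardown. A condensed sketch of that layout follows, showing only the first initiator/target pair; interface names and addresses are taken from the trace, and the real helper additionally creates nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4 and adds the FORWARD rule on the bridge.

  ip netns add nvmf_tgt_ns_spdk
  # One veth pair for the initiator side, one for the target side.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # Initiator gets 10.0.0.1 on the host, target gets 10.0.0.3 inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # Join the host-side peers with a bridge and open TCP/4420 with a tagged rule.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3   # host reaches the target namespace address through the bridge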
00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:54.734 12:35:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:54.734 [2024-11-19 12:35:59.872851] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:54.734 [2024-11-19 12:35:59.872949] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.993 [2024-11-19 12:36:00.014624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:54.993 [2024-11-19 12:36:00.054790] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.993 [2024-11-19 12:36:00.054855] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.993 [2024-11-19 12:36:00.054866] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.993 [2024-11-19 12:36:00.054875] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.993 [2024-11-19 12:36:00.054882] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.993 [2024-11-19 12:36:00.055047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.993 [2024-11-19 12:36:00.055195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.993 [2024-11-19 12:36:00.055264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:54.993 [2024-11-19 12:36:00.055266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.993 [2024-11-19 12:36:00.086177] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:54.993 [2024-11-19 12:36:00.192643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:15:54.993 12:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:54.993 Malloc1 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.993 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:54.993 [2024-11-19 12:36:00.247993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:55.253 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.253 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:55.253 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:15:55.253 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.253 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.253 Malloc2 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 Malloc3 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 Malloc4 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 Malloc5 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:15:55.254 
12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 Malloc6 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 Malloc7 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.254 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:15:55.255 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.255 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 Malloc8 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 
12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 Malloc9 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 Malloc10 00:15:55.515 12:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 Malloc11 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.515 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.516 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:15:55.516 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.516 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:55.516 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.516 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:15:55.516 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:55.516 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:55.775 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:15:55.775 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:55.775 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:55.775 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:55.775 12:36:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:57.682 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:57.682 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:57.682 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:15:57.682 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:57.682 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:57.682 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:57.682 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.682 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:15:57.942 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:15:57.942 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:57.942 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:57.942 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:57.942 12:36:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:15:59.849 12:36:04 
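(For reference, the per-subsystem setup traced above — multiconnection.sh lines 21-25 — amounts to roughly the following loop. This is a minimal sketch, assuming SPDK's rpc.py is invoked directly rather than through the harness's rpc_cmd wrapper; the bdev geometry (64 MiB, 512-byte blocks), the NQNs/serials and the 10.0.0.3:4420 TCP listener are copied from the trace, everything else is illustrative.)
NVMF_SUBSYS=11
for i in $(seq 1 $NVMF_SUBSYS); do
    rpc.py bdev_malloc_create 64 512 -b "Malloc$i"                              # RAM-backed bdev, 64 MiB / 512 B blocks
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"   # allow any host, serial number SPDK$i
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"       # expose the bdev as a namespace
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
done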
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:59.849 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:59.849 12:36:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:15:59.849 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:59.849 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:59.849 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:15:59.849 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.849 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:16:00.108 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:16:00.108 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:00.108 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:00.108 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:00.108 12:36:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:02.013 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:02.013 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:02.013 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:16:02.013 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:02.013 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:02.013 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:02.013 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:02.013 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:16:02.271 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:16:02.271 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:02.271 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:02.271 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:16:02.271 12:36:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:04.174 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:04.174 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:04.174 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:16:04.174 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:04.174 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:04.174 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:04.174 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.174 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:16:04.432 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:16:04.432 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:04.432 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:04.432 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:04.432 12:36:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:06.350 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:06.350 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:06.350 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:16:06.350 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:06.350 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:06.350 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:06.350 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:06.350 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:16:06.613 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:16:06.613 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:06.613 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:16:06.613 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:06.613 12:36:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:08.516 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:08.516 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:08.516 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:16:08.516 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:08.516 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.516 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:08.516 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:08.516 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:16:08.775 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:16:08.775 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:08.775 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.775 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:08.775 12:36:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:10.680 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:10.680 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:10.680 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:16:10.680 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:10.680 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.680 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:10.680 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:10.680 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:16:10.939 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:16:10.939 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:16:10.939 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:10.939 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:10.939 12:36:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:12.845 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:12.845 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:12.845 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:16:12.845 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:12.845 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:12.845 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:12.845 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:12.845 12:36:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:16:13.104 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:16:13.104 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:13.104 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:13.104 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:13.104 12:36:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:15.008 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:15.008 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:15.009 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:16:15.009 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:15.009 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:15.009 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:15.009 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:15.009 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:16:15.268 12:36:20 
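(The host-side trace repeats one connect-and-wait step per subsystem. A minimal sketch of that step, assuming nvme-cli and lsblk are available; the host NQN/hostid UUID and the 10.0.0.3:4420 target address come from the trace, and the polling mirrors the (( i++ <= 15 )) / sleep 2 loop of waitforserial in autotest_common.sh.)
for i in $(seq 1 $NVMF_SUBSYS); do
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 \
                 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 \
                 -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.3 -s 4420
    # Poll until a block device whose serial is SPDK$i appears, giving up after ~16 tries.
    tries=0
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
        tries=$((tries + 1))
        [ "$tries" -le 15 ] || { echo "serial SPDK$i never appeared" >&2; exit 1; }
        sleep 2
    done
done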
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:16:15.268 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:15.268 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.268 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:15.268 12:36:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:17.172 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:17.172 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:17.172 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:16:17.172 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:17.172 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:17.172 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:17.173 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:17.173 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:16:17.431 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:16:17.431 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:17.431 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:17.431 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:17.431 12:36:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:19.335 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:19.335 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:19.335 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:16:19.335 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:19.335 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:19.335 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:19.335 12:36:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:16:19.335 [global] 00:16:19.335 thread=1 00:16:19.335 invalidate=1 00:16:19.335 rw=read 00:16:19.335 time_based=1 
00:16:19.335 runtime=10 00:16:19.335 ioengine=libaio 00:16:19.335 direct=1 00:16:19.335 bs=262144 00:16:19.335 iodepth=64 00:16:19.335 norandommap=1 00:16:19.335 numjobs=1 00:16:19.335 00:16:19.335 [job0] 00:16:19.335 filename=/dev/nvme0n1 00:16:19.335 [job1] 00:16:19.335 filename=/dev/nvme10n1 00:16:19.335 [job2] 00:16:19.335 filename=/dev/nvme1n1 00:16:19.335 [job3] 00:16:19.335 filename=/dev/nvme2n1 00:16:19.335 [job4] 00:16:19.335 filename=/dev/nvme3n1 00:16:19.335 [job5] 00:16:19.335 filename=/dev/nvme4n1 00:16:19.335 [job6] 00:16:19.335 filename=/dev/nvme5n1 00:16:19.335 [job7] 00:16:19.335 filename=/dev/nvme6n1 00:16:19.335 [job8] 00:16:19.335 filename=/dev/nvme7n1 00:16:19.335 [job9] 00:16:19.335 filename=/dev/nvme8n1 00:16:19.335 [job10] 00:16:19.335 filename=/dev/nvme9n1 00:16:19.594 Could not set queue depth (nvme0n1) 00:16:19.594 Could not set queue depth (nvme10n1) 00:16:19.594 Could not set queue depth (nvme1n1) 00:16:19.594 Could not set queue depth (nvme2n1) 00:16:19.594 Could not set queue depth (nvme3n1) 00:16:19.594 Could not set queue depth (nvme4n1) 00:16:19.594 Could not set queue depth (nvme5n1) 00:16:19.594 Could not set queue depth (nvme6n1) 00:16:19.594 Could not set queue depth (nvme7n1) 00:16:19.594 Could not set queue depth (nvme8n1) 00:16:19.594 Could not set queue depth (nvme9n1) 00:16:19.594 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:19.594 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:19.594 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:19.594 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:19.594 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:19.594 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:19.594 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:19.594 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:19.594 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:19.594 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:19.594 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:19.594 fio-3.35 00:16:19.594 Starting 11 threads 00:16:31.835 00:16:31.835 job0: (groupid=0, jobs=1): err= 0: pid=87210: Tue Nov 19 12:36:35 2024 00:16:31.835 read: IOPS=378, BW=94.7MiB/s (99.3MB/s)(954MiB/10074msec) 00:16:31.835 slat (usec): min=20, max=45916, avg=2617.69, stdev=6055.57 00:16:31.835 clat (msec): min=18, max=227, avg=166.14, stdev=21.06 00:16:31.835 lat (msec): min=19, max=227, avg=168.76, stdev=21.32 00:16:31.835 clat percentiles (msec): 00:16:31.835 | 1.00th=[ 74], 5.00th=[ 140], 10.00th=[ 148], 20.00th=[ 157], 00:16:31.835 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 171], 00:16:31.835 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 194], 00:16:31.835 | 99.00th=[ 207], 99.50th=[ 213], 99.90th=[ 228], 99.95th=[ 228], 00:16:31.835 | 99.99th=[ 228] 00:16:31.835 bw ( KiB/s): min=87377, max=102400, 
per=14.91%, avg=96046.25, stdev=4932.33, samples=20 00:16:31.835 iops : min= 341, max= 400, avg=375.00, stdev=19.21, samples=20 00:16:31.835 lat (msec) : 20=0.05%, 50=0.79%, 100=0.42%, 250=98.74% 00:16:31.835 cpu : usr=0.25%, sys=1.67%, ctx=779, majf=0, minf=4097 00:16:31.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:16:31.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:31.835 issued rwts: total=3815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.835 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:31.835 job1: (groupid=0, jobs=1): err= 0: pid=87211: Tue Nov 19 12:36:35 2024 00:16:31.835 read: IOPS=377, BW=94.4MiB/s (99.0MB/s)(951MiB/10077msec) 00:16:31.835 slat (usec): min=20, max=48254, avg=2625.63, stdev=6072.21 00:16:31.835 clat (msec): min=16, max=231, avg=166.65, stdev=20.68 00:16:31.835 lat (msec): min=17, max=231, avg=169.28, stdev=20.95 00:16:31.835 clat percentiles (msec): 00:16:31.835 | 1.00th=[ 81], 5.00th=[ 142], 10.00th=[ 148], 20.00th=[ 155], 00:16:31.835 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 171], 00:16:31.835 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 197], 00:16:31.835 | 99.00th=[ 209], 99.50th=[ 215], 99.90th=[ 226], 99.95th=[ 228], 00:16:31.835 | 99.99th=[ 232] 00:16:31.835 bw ( KiB/s): min=86016, max=101684, per=14.86%, avg=95759.10, stdev=4782.92, samples=20 00:16:31.835 iops : min= 336, max= 397, avg=374.00, stdev=18.67, samples=20 00:16:31.835 lat (msec) : 20=0.11%, 50=0.16%, 100=1.26%, 250=98.48% 00:16:31.835 cpu : usr=0.12%, sys=1.82%, ctx=792, majf=0, minf=4097 00:16:31.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:16:31.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:31.835 issued rwts: total=3804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.836 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:31.836 job2: (groupid=0, jobs=1): err= 0: pid=87212: Tue Nov 19 12:36:35 2024 00:16:31.836 read: IOPS=116, BW=29.0MiB/s (30.4MB/s)(294MiB/10130msec) 00:16:31.836 slat (usec): min=20, max=304591, avg=8509.73, stdev=23332.17 00:16:31.836 clat (msec): min=33, max=748, avg=541.98, stdev=80.62 00:16:31.836 lat (msec): min=33, max=772, avg=550.49, stdev=81.16 00:16:31.836 clat percentiles (msec): 00:16:31.836 | 1.00th=[ 199], 5.00th=[ 435], 10.00th=[ 468], 20.00th=[ 506], 00:16:31.836 | 30.00th=[ 518], 40.00th=[ 535], 50.00th=[ 550], 60.00th=[ 567], 00:16:31.836 | 70.00th=[ 575], 80.00th=[ 592], 90.00th=[ 625], 95.00th=[ 651], 00:16:31.836 | 99.00th=[ 684], 99.50th=[ 718], 99.90th=[ 726], 99.95th=[ 751], 00:16:31.836 | 99.99th=[ 751] 00:16:31.836 bw ( KiB/s): min=10260, max=35398, per=4.42%, avg=28472.10, stdev=5727.89, samples=20 00:16:31.836 iops : min= 40, max= 138, avg=111.10, stdev=22.37, samples=20 00:16:31.836 lat (msec) : 50=0.43%, 250=1.11%, 500=16.50%, 750=81.97% 00:16:31.836 cpu : usr=0.13%, sys=0.60%, ctx=230, majf=0, minf=4097 00:16:31.836 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.6% 00:16:31.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.836 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:31.836 issued rwts: total=1176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.836 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:16:31.836 job3: (groupid=0, jobs=1): err= 0: pid=87213: Tue Nov 19 12:36:35 2024 00:16:31.836 read: IOPS=191, BW=47.9MiB/s (50.2MB/s)(484MiB/10110msec) 00:16:31.836 slat (usec): min=20, max=78421, avg=5157.99, stdev=12526.25 00:16:31.836 clat (msec): min=17, max=516, avg=328.34, stdev=88.06 00:16:31.836 lat (msec): min=18, max=516, avg=333.50, stdev=89.30 00:16:31.836 clat percentiles (msec): 00:16:31.836 | 1.00th=[ 28], 5.00th=[ 153], 10.00th=[ 178], 20.00th=[ 296], 00:16:31.836 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 363], 00:16:31.836 | 70.00th=[ 368], 80.00th=[ 380], 90.00th=[ 401], 95.00th=[ 439], 00:16:31.836 | 99.00th=[ 468], 99.50th=[ 481], 99.90th=[ 498], 99.95th=[ 518], 00:16:31.836 | 99.99th=[ 518] 00:16:31.836 bw ( KiB/s): min=35840, max=88064, per=7.45%, avg=47974.20, stdev=13535.76, samples=20 00:16:31.836 iops : min= 140, max= 344, avg=187.30, stdev=52.81, samples=20 00:16:31.836 lat (msec) : 20=0.15%, 50=1.29%, 100=0.88%, 250=16.57%, 500=81.05% 00:16:31.836 lat (msec) : 750=0.05% 00:16:31.836 cpu : usr=0.16%, sys=0.87%, ctx=389, majf=0, minf=4097 00:16:31.836 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:16:31.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.836 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:31.836 issued rwts: total=1937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.836 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:31.836 job4: (groupid=0, jobs=1): err= 0: pid=87214: Tue Nov 19 12:36:35 2024 00:16:31.836 read: IOPS=112, BW=28.0MiB/s (29.4MB/s)(284MiB/10136msec) 00:16:31.836 slat (usec): min=20, max=369345, avg=8434.70, stdev=27527.19 00:16:31.836 clat (msec): min=22, max=959, avg=561.18, stdev=114.38 00:16:31.836 lat (msec): min=23, max=959, avg=569.62, stdev=114.94 00:16:31.836 clat percentiles (msec): 00:16:31.836 | 1.00th=[ 116], 5.00th=[ 397], 10.00th=[ 435], 20.00th=[ 481], 00:16:31.836 | 30.00th=[ 523], 40.00th=[ 542], 50.00th=[ 567], 60.00th=[ 592], 00:16:31.836 | 70.00th=[ 625], 80.00th=[ 651], 90.00th=[ 693], 95.00th=[ 726], 00:16:31.836 | 99.00th=[ 793], 99.50th=[ 793], 99.90th=[ 810], 99.95th=[ 961], 00:16:31.836 | 99.99th=[ 961] 00:16:31.836 bw ( KiB/s): min= 9216, max=36864, per=4.26%, avg=27463.55, stdev=7439.88, samples=20 00:16:31.836 iops : min= 36, max= 144, avg=107.20, stdev=29.08, samples=20 00:16:31.836 lat (msec) : 50=0.09%, 100=0.88%, 250=1.14%, 500=22.69%, 750=72.47% 00:16:31.836 lat (msec) : 1000=2.73% 00:16:31.836 cpu : usr=0.03%, sys=0.60%, ctx=207, majf=0, minf=4097 00:16:31.836 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5% 00:16:31.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.836 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:31.836 issued rwts: total=1137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.836 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:31.836 job5: (groupid=0, jobs=1): err= 0: pid=87215: Tue Nov 19 12:36:35 2024 00:16:31.836 read: IOPS=116, BW=29.1MiB/s (30.5MB/s)(295MiB/10137msec) 00:16:31.836 slat (usec): min=20, max=376160, avg=8511.58, stdev=26275.63 00:16:31.836 clat (msec): min=85, max=780, avg=540.65, stdev=114.51 00:16:31.836 lat (msec): min=165, max=940, avg=549.16, stdev=115.72 00:16:31.836 clat percentiles (msec): 00:16:31.836 | 1.00th=[ 178], 5.00th=[ 262], 10.00th=[ 443], 20.00th=[ 498], 00:16:31.836 | 30.00th=[ 527], 
40.00th=[ 542], 50.00th=[ 550], 60.00th=[ 567], 00:16:31.836 | 70.00th=[ 584], 80.00th=[ 609], 90.00th=[ 676], 95.00th=[ 709], 00:16:31.836 | 99.00th=[ 743], 99.50th=[ 776], 99.90th=[ 776], 99.95th=[ 785], 00:16:31.836 | 99.99th=[ 785] 00:16:31.836 bw ( KiB/s): min=16384, max=33280, per=4.43%, avg=28563.55, stdev=4603.40, samples=20 00:16:31.836 iops : min= 64, max= 130, avg=111.50, stdev=17.96, samples=20 00:16:31.836 lat (msec) : 100=0.08%, 250=4.41%, 500=15.93%, 750=78.81%, 1000=0.76% 00:16:31.836 cpu : usr=0.07%, sys=0.61%, ctx=213, majf=0, minf=4097 00:16:31.836 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.7% 00:16:31.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.836 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:31.836 issued rwts: total=1180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.836 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:31.836 job6: (groupid=0, jobs=1): err= 0: pid=87216: Tue Nov 19 12:36:35 2024 00:16:31.836 read: IOPS=375, BW=93.9MiB/s (98.5MB/s)(945MiB/10061msec) 00:16:31.836 slat (usec): min=20, max=96981, avg=2640.19, stdev=6351.29 00:16:31.836 clat (msec): min=57, max=469, avg=167.48, stdev=44.61 00:16:31.836 lat (msec): min=57, max=469, avg=170.12, stdev=45.19 00:16:31.836 clat percentiles (msec): 00:16:31.836 | 1.00th=[ 103], 5.00th=[ 142], 10.00th=[ 146], 20.00th=[ 150], 00:16:31.836 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:16:31.836 | 70.00th=[ 167], 80.00th=[ 171], 90.00th=[ 182], 95.00th=[ 201], 00:16:31.836 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 426], 99.95th=[ 426], 00:16:31.836 | 99.99th=[ 468] 00:16:31.836 bw ( KiB/s): min=41042, max=105683, per=14.77%, avg=95190.65, stdev=18308.76, samples=20 00:16:31.836 iops : min= 160, max= 412, avg=371.65, stdev=71.49, samples=20 00:16:31.836 lat (msec) : 100=0.90%, 250=95.05%, 500=4.05% 00:16:31.836 cpu : usr=0.26%, sys=1.65%, ctx=815, majf=0, minf=4097 00:16:31.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:16:31.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:31.836 issued rwts: total=3780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.836 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:31.836 job7: (groupid=0, jobs=1): err= 0: pid=87217: Tue Nov 19 12:36:35 2024 00:16:31.836 read: IOPS=373, BW=93.4MiB/s (97.9MB/s)(940MiB/10066msec) 00:16:31.836 slat (usec): min=16, max=208878, avg=2654.70, stdev=7216.89 00:16:31.836 clat (msec): min=16, max=510, avg=168.44, stdev=48.37 00:16:31.836 lat (msec): min=17, max=510, avg=171.09, stdev=49.02 00:16:31.836 clat percentiles (msec): 00:16:31.836 | 1.00th=[ 89], 5.00th=[ 142], 10.00th=[ 146], 20.00th=[ 150], 00:16:31.836 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:16:31.836 | 70.00th=[ 167], 80.00th=[ 171], 90.00th=[ 180], 95.00th=[ 247], 00:16:31.836 | 99.00th=[ 426], 99.50th=[ 430], 99.90th=[ 447], 99.95th=[ 510], 00:16:31.836 | 99.99th=[ 510] 00:16:31.836 bw ( KiB/s): min=37376, max=106496, per=14.69%, avg=94622.60, stdev=18485.99, samples=20 00:16:31.836 iops : min= 146, max= 416, avg=369.60, stdev=72.20, samples=20 00:16:31.836 lat (msec) : 20=0.16%, 50=0.13%, 100=0.72%, 250=94.02%, 500=4.92% 00:16:31.836 lat (msec) : 750=0.05% 00:16:31.836 cpu : usr=0.25%, sys=1.70%, ctx=795, majf=0, minf=4098 00:16:31.836 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:16:31.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:31.836 issued rwts: total=3760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.836 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:31.836 job8: (groupid=0, jobs=1): err= 0: pid=87218: Tue Nov 19 12:36:35 2024 00:16:31.836 read: IOPS=116, BW=29.0MiB/s (30.4MB/s)(294MiB/10133msec) 00:16:31.836 slat (usec): min=23, max=278985, avg=8497.36, stdev=23324.79 00:16:31.836 clat (msec): min=84, max=788, avg=541.62, stdev=125.22 00:16:31.836 lat (msec): min=85, max=788, avg=550.12, stdev=125.77 00:16:31.836 clat percentiles (msec): 00:16:31.836 | 1.00th=[ 95], 5.00th=[ 239], 10.00th=[ 430], 20.00th=[ 472], 00:16:31.836 | 30.00th=[ 514], 40.00th=[ 542], 50.00th=[ 567], 60.00th=[ 584], 00:16:31.836 | 70.00th=[ 609], 80.00th=[ 634], 90.00th=[ 667], 95.00th=[ 684], 00:16:31.836 | 99.00th=[ 751], 99.50th=[ 751], 99.90th=[ 768], 99.95th=[ 793], 00:16:31.836 | 99.99th=[ 793] 00:16:31.836 bw ( KiB/s): min=15872, max=34746, per=4.43%, avg=28514.85, stdev=4589.80, samples=20 00:16:31.836 iops : min= 62, max= 135, avg=111.30, stdev=17.85, samples=20 00:16:31.836 lat (msec) : 100=1.19%, 250=4.16%, 500=20.73%, 750=73.66%, 1000=0.25% 00:16:31.836 cpu : usr=0.08%, sys=0.55%, ctx=229, majf=0, minf=4097 00:16:31.836 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.6% 00:16:31.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.836 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:31.836 issued rwts: total=1177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.836 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:31.836 job9: (groupid=0, jobs=1): err= 0: pid=87219: Tue Nov 19 12:36:35 2024 00:16:31.836 read: IOPS=179, BW=44.8MiB/s (46.9MB/s)(453MiB/10112msec) 00:16:31.836 slat (usec): min=17, max=94019, avg=5414.68, stdev=13225.31 00:16:31.836 clat (msec): min=18, max=505, avg=351.36, stdev=63.60 00:16:31.836 lat (msec): min=19, max=505, avg=356.77, stdev=64.43 00:16:31.837 clat percentiles (msec): 00:16:31.837 | 1.00th=[ 65], 5.00th=[ 213], 10.00th=[ 296], 20.00th=[ 326], 00:16:31.837 | 30.00th=[ 342], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 372], 00:16:31.837 | 70.00th=[ 380], 80.00th=[ 393], 90.00th=[ 409], 95.00th=[ 430], 00:16:31.837 | 99.00th=[ 464], 99.50th=[ 468], 99.90th=[ 506], 99.95th=[ 506], 00:16:31.837 | 99.99th=[ 506] 00:16:31.837 bw ( KiB/s): min=36352, max=50586, per=6.94%, avg=44722.60, stdev=3314.98, samples=20 00:16:31.837 iops : min= 142, max= 197, avg=174.55, stdev=12.91, samples=20 00:16:31.837 lat (msec) : 20=0.11%, 50=0.72%, 100=0.22%, 250=5.30%, 500=93.54% 00:16:31.837 lat (msec) : 750=0.11% 00:16:31.837 cpu : usr=0.06%, sys=0.89%, ctx=378, majf=0, minf=4097 00:16:31.837 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:16:31.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.837 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:31.837 issued rwts: total=1811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.837 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:31.837 job10: (groupid=0, jobs=1): err= 0: pid=87220: Tue Nov 19 12:36:35 2024 00:16:31.837 read: IOPS=191, BW=47.9MiB/s (50.2MB/s)(484MiB/10105msec) 00:16:31.837 slat (usec): min=19, max=79830, 
avg=5160.55, stdev=12303.26 00:16:31.837 clat (msec): min=45, max=475, avg=328.48, stdev=84.27 00:16:31.837 lat (msec): min=46, max=500, avg=333.64, stdev=85.51 00:16:31.837 clat percentiles (msec): 00:16:31.837 | 1.00th=[ 91], 5.00th=[ 144], 10.00th=[ 184], 20.00th=[ 271], 00:16:31.837 | 30.00th=[ 330], 40.00th=[ 342], 50.00th=[ 355], 60.00th=[ 363], 00:16:31.837 | 70.00th=[ 376], 80.00th=[ 388], 90.00th=[ 409], 95.00th=[ 422], 00:16:31.837 | 99.00th=[ 447], 99.50th=[ 468], 99.90th=[ 477], 99.95th=[ 477], 00:16:31.837 | 99.99th=[ 477] 00:16:31.837 bw ( KiB/s): min=37888, max=90954, per=7.44%, avg=47935.95, stdev=12381.42, samples=20 00:16:31.837 iops : min= 148, max= 355, avg=187.15, stdev=48.28, samples=20 00:16:31.837 lat (msec) : 50=0.36%, 100=0.98%, 250=16.89%, 500=81.77% 00:16:31.837 cpu : usr=0.15%, sys=0.85%, ctx=389, majf=0, minf=4097 00:16:31.837 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:16:31.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:31.837 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:31.837 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:31.837 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:31.837 00:16:31.837 Run status group 0 (all jobs): 00:16:31.837 READ: bw=629MiB/s (660MB/s), 28.0MiB/s-94.7MiB/s (29.4MB/s-99.3MB/s), io=6378MiB (6688MB), run=10061-10137msec 00:16:31.837 00:16:31.837 Disk stats (read/write): 00:16:31.837 nvme0n1: ios=7502/0, merge=0/0, ticks=1234434/0, in_queue=1234434, util=97.76% 00:16:31.837 nvme10n1: ios=7484/0, merge=0/0, ticks=1233561/0, in_queue=1233561, util=97.97% 00:16:31.837 nvme1n1: ios=2225/0, merge=0/0, ticks=1210072/0, in_queue=1210072, util=98.10% 00:16:31.837 nvme2n1: ios=3770/0, merge=0/0, ticks=1226972/0, in_queue=1226972, util=98.22% 00:16:31.837 nvme3n1: ios=2149/0, merge=0/0, ticks=1212789/0, in_queue=1212789, util=98.26% 00:16:31.837 nvme4n1: ios=2233/0, merge=0/0, ticks=1218364/0, in_queue=1218364, util=98.56% 00:16:31.837 nvme5n1: ios=7440/0, merge=0/0, ticks=1235155/0, in_queue=1235155, util=98.54% 00:16:31.837 nvme6n1: ios=7409/0, merge=0/0, ticks=1236959/0, in_queue=1236959, util=98.71% 00:16:31.837 nvme7n1: ios=2232/0, merge=0/0, ticks=1213961/0, in_queue=1213961, util=98.96% 00:16:31.837 nvme8n1: ios=3495/0, merge=0/0, ticks=1226780/0, in_queue=1226780, util=99.05% 00:16:31.837 nvme9n1: ios=3759/0, merge=0/0, ticks=1227677/0, in_queue=1227677, util=99.11% 00:16:31.837 12:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:16:31.837 [global] 00:16:31.837 thread=1 00:16:31.837 invalidate=1 00:16:31.837 rw=randwrite 00:16:31.837 time_based=1 00:16:31.837 runtime=10 00:16:31.837 ioengine=libaio 00:16:31.837 direct=1 00:16:31.837 bs=262144 00:16:31.837 iodepth=64 00:16:31.837 norandommap=1 00:16:31.837 numjobs=1 00:16:31.837 00:16:31.837 [job0] 00:16:31.837 filename=/dev/nvme0n1 00:16:31.837 [job1] 00:16:31.837 filename=/dev/nvme10n1 00:16:31.837 [job2] 00:16:31.837 filename=/dev/nvme1n1 00:16:31.837 [job3] 00:16:31.837 filename=/dev/nvme2n1 00:16:31.837 [job4] 00:16:31.837 filename=/dev/nvme3n1 00:16:31.837 [job5] 00:16:31.837 filename=/dev/nvme4n1 00:16:31.837 [job6] 00:16:31.837 filename=/dev/nvme5n1 00:16:31.837 [job7] 00:16:31.837 filename=/dev/nvme6n1 00:16:31.837 [job8] 00:16:31.837 filename=/dev/nvme7n1 00:16:31.837 [job9] 00:16:31.837 
filename=/dev/nvme8n1 00:16:31.837 [job10] 00:16:31.837 filename=/dev/nvme9n1 00:16:31.837 Could not set queue depth (nvme0n1) 00:16:31.837 Could not set queue depth (nvme10n1) 00:16:31.837 Could not set queue depth (nvme1n1) 00:16:31.837 Could not set queue depth (nvme2n1) 00:16:31.837 Could not set queue depth (nvme3n1) 00:16:31.837 Could not set queue depth (nvme4n1) 00:16:31.837 Could not set queue depth (nvme5n1) 00:16:31.837 Could not set queue depth (nvme6n1) 00:16:31.837 Could not set queue depth (nvme7n1) 00:16:31.837 Could not set queue depth (nvme8n1) 00:16:31.837 Could not set queue depth (nvme9n1) 00:16:31.837 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:31.837 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:31.837 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:31.837 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:31.837 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:31.837 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:31.837 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:31.837 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:31.837 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:31.837 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:31.837 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:31.837 fio-3.35 00:16:31.837 Starting 11 threads 00:16:41.820 00:16:41.821 job0: (groupid=0, jobs=1): err= 0: pid=87414: Tue Nov 19 12:36:46 2024 00:16:41.821 write: IOPS=276, BW=69.0MiB/s (72.4MB/s)(701MiB/10161msec); 0 zone resets 00:16:41.821 slat (usec): min=17, max=27827, avg=3489.59, stdev=6153.98 00:16:41.821 clat (msec): min=21, max=388, avg=228.17, stdev=24.50 00:16:41.821 lat (msec): min=21, max=388, avg=231.66, stdev=24.06 00:16:41.821 clat percentiles (msec): 00:16:41.821 | 1.00th=[ 150], 5.00th=[ 209], 10.00th=[ 215], 20.00th=[ 218], 00:16:41.821 | 30.00th=[ 222], 40.00th=[ 226], 50.00th=[ 228], 60.00th=[ 230], 00:16:41.821 | 70.00th=[ 232], 80.00th=[ 234], 90.00th=[ 236], 95.00th=[ 257], 00:16:41.821 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 372], 99.95th=[ 388], 00:16:41.821 | 99.99th=[ 388] 00:16:41.821 bw ( KiB/s): min=57344, max=73728, per=8.99%, avg=70201.20, stdev=4216.22, samples=20 00:16:41.821 iops : min= 224, max= 288, avg=274.20, stdev=16.53, samples=20 00:16:41.821 lat (msec) : 50=0.07%, 100=0.43%, 250=93.19%, 500=6.31% 00:16:41.821 cpu : usr=0.48%, sys=0.90%, ctx=3149, majf=0, minf=1 00:16:41.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:16:41.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.821 issued rwts: total=0,2805,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.821 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:16:41.821 job1: (groupid=0, jobs=1): err= 0: pid=87415: Tue Nov 19 12:36:46 2024 00:16:41.821 write: IOPS=277, BW=69.3MiB/s (72.7MB/s)(704MiB/10159msec); 0 zone resets 00:16:41.821 slat (usec): min=16, max=28051, avg=3504.91, stdev=6162.19 00:16:41.821 clat (msec): min=21, max=395, avg=227.28, stdev=28.02 00:16:41.821 lat (msec): min=21, max=395, avg=230.78, stdev=27.81 00:16:41.821 clat percentiles (msec): 00:16:41.821 | 1.00th=[ 103], 5.00th=[ 209], 10.00th=[ 213], 20.00th=[ 218], 00:16:41.821 | 30.00th=[ 222], 40.00th=[ 226], 50.00th=[ 228], 60.00th=[ 230], 00:16:41.821 | 70.00th=[ 232], 80.00th=[ 234], 90.00th=[ 236], 95.00th=[ 253], 00:16:41.821 | 99.00th=[ 330], 99.50th=[ 338], 99.90th=[ 380], 99.95th=[ 397], 00:16:41.821 | 99.99th=[ 397] 00:16:41.821 bw ( KiB/s): min=55296, max=75776, per=9.02%, avg=70476.80, stdev=4046.10, samples=20 00:16:41.821 iops : min= 216, max= 296, avg=275.30, stdev=15.81, samples=20 00:16:41.821 lat (msec) : 50=0.43%, 100=0.57%, 250=93.29%, 500=5.72% 00:16:41.821 cpu : usr=0.58%, sys=0.76%, ctx=3752, majf=0, minf=1 00:16:41.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:16:41.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.821 issued rwts: total=0,2816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.821 job2: (groupid=0, jobs=1): err= 0: pid=87427: Tue Nov 19 12:36:46 2024 00:16:41.821 write: IOPS=177, BW=44.5MiB/s (46.6MB/s)(454MiB/10208msec); 0 zone resets 00:16:41.821 slat (usec): min=18, max=54722, avg=5384.63, stdev=9667.08 00:16:41.821 clat (msec): min=22, max=557, avg=354.39, stdev=38.09 00:16:41.821 lat (msec): min=22, max=557, avg=359.77, stdev=37.72 00:16:41.821 clat percentiles (msec): 00:16:41.821 | 1.00th=[ 194], 5.00th=[ 284], 10.00th=[ 330], 20.00th=[ 347], 00:16:41.821 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 363], 60.00th=[ 368], 00:16:41.821 | 70.00th=[ 372], 80.00th=[ 372], 90.00th=[ 380], 95.00th=[ 384], 00:16:41.821 | 99.00th=[ 443], 99.50th=[ 510], 99.90th=[ 558], 99.95th=[ 558], 00:16:41.821 | 99.99th=[ 558] 00:16:41.821 bw ( KiB/s): min=43008, max=47104, per=5.74%, avg=44820.85, stdev=1435.39, samples=20 00:16:41.821 iops : min= 168, max= 184, avg=175.05, stdev= 5.56, samples=20 00:16:41.821 lat (msec) : 50=0.11%, 250=2.09%, 500=97.25%, 750=0.55% 00:16:41.821 cpu : usr=0.35%, sys=0.56%, ctx=1825, majf=0, minf=1 00:16:41.821 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:16:41.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.821 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.821 issued rwts: total=0,1815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.821 job3: (groupid=0, jobs=1): err= 0: pid=87428: Tue Nov 19 12:36:46 2024 00:16:41.821 write: IOPS=182, BW=45.6MiB/s (47.8MB/s)(466MiB/10218msec); 0 zone resets 00:16:41.821 slat (usec): min=20, max=41155, avg=5344.86, stdev=9509.09 00:16:41.821 clat (msec): min=13, max=582, avg=345.12, stdev=53.80 00:16:41.821 lat (msec): min=13, max=582, avg=350.46, stdev=53.98 00:16:41.821 clat percentiles (msec): 00:16:41.821 | 1.00th=[ 74], 5.00th=[ 262], 10.00th=[ 300], 20.00th=[ 338], 00:16:41.821 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 363], 00:16:41.821 | 
70.00th=[ 368], 80.00th=[ 372], 90.00th=[ 372], 95.00th=[ 376], 00:16:41.821 | 99.00th=[ 464], 99.50th=[ 535], 99.90th=[ 584], 99.95th=[ 584], 00:16:41.821 | 99.99th=[ 584] 00:16:41.821 bw ( KiB/s): min=43008, max=59511, per=5.91%, avg=46137.15, stdev=3604.90, samples=20 00:16:41.821 iops : min= 168, max= 232, avg=180.20, stdev=13.99, samples=20 00:16:41.821 lat (msec) : 20=0.05%, 50=0.43%, 100=0.86%, 250=2.57%, 500=95.34% 00:16:41.821 lat (msec) : 750=0.75% 00:16:41.821 cpu : usr=0.39%, sys=0.53%, ctx=1348, majf=0, minf=1 00:16:41.821 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:16:41.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.821 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.821 issued rwts: total=0,1865,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.821 job4: (groupid=0, jobs=1): err= 0: pid=87429: Tue Nov 19 12:36:46 2024 00:16:41.821 write: IOPS=273, BW=68.3MiB/s (71.6MB/s)(694MiB/10159msec); 0 zone resets 00:16:41.821 slat (usec): min=18, max=128103, avg=3599.07, stdev=6696.08 00:16:41.821 clat (msec): min=128, max=387, avg=230.43, stdev=25.25 00:16:41.821 lat (msec): min=128, max=387, avg=234.03, stdev=24.75 00:16:41.821 clat percentiles (msec): 00:16:41.821 | 1.00th=[ 182], 5.00th=[ 211], 10.00th=[ 215], 20.00th=[ 220], 00:16:41.821 | 30.00th=[ 222], 40.00th=[ 226], 50.00th=[ 228], 60.00th=[ 230], 00:16:41.821 | 70.00th=[ 232], 80.00th=[ 234], 90.00th=[ 236], 95.00th=[ 266], 00:16:41.821 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 372], 99.95th=[ 388], 00:16:41.821 | 99.99th=[ 388] 00:16:41.821 bw ( KiB/s): min=49250, max=73728, per=8.89%, avg=69457.70, stdev=6424.37, samples=20 00:16:41.821 iops : min= 192, max= 288, avg=271.30, stdev=25.16, samples=20 00:16:41.821 lat (msec) : 250=93.23%, 500=6.77% 00:16:41.821 cpu : usr=0.55%, sys=0.80%, ctx=3406, majf=0, minf=1 00:16:41.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:16:41.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.821 issued rwts: total=0,2776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.821 job5: (groupid=0, jobs=1): err= 0: pid=87430: Tue Nov 19 12:36:46 2024 00:16:41.821 write: IOPS=317, BW=79.4MiB/s (83.3MB/s)(807MiB/10167msec); 0 zone resets 00:16:41.821 slat (usec): min=15, max=32433, avg=3062.61, stdev=5730.58 00:16:41.821 clat (msec): min=8, max=401, avg=198.36, stdev=65.57 00:16:41.821 lat (msec): min=8, max=401, avg=201.43, stdev=66.36 00:16:41.821 clat percentiles (msec): 00:16:41.821 | 1.00th=[ 33], 5.00th=[ 66], 10.00th=[ 70], 20.00th=[ 207], 00:16:41.821 | 30.00th=[ 218], 40.00th=[ 220], 50.00th=[ 226], 60.00th=[ 228], 00:16:41.821 | 70.00th=[ 230], 80.00th=[ 232], 90.00th=[ 234], 95.00th=[ 236], 00:16:41.821 | 99.00th=[ 313], 99.50th=[ 342], 99.90th=[ 388], 99.95th=[ 401], 00:16:41.821 | 99.99th=[ 401] 00:16:41.821 bw ( KiB/s): min=69632, max=235520, per=10.38%, avg=81119.50, stdev=36885.57, samples=20 00:16:41.821 iops : min= 272, max= 920, avg=316.50, stdev=144.15, samples=20 00:16:41.821 lat (msec) : 10=0.12%, 20=0.37%, 50=1.46%, 100=16.04%, 250=78.82% 00:16:41.821 lat (msec) : 500=3.19% 00:16:41.821 cpu : usr=0.55%, sys=0.97%, ctx=4019, majf=0, minf=1 00:16:41.821 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 
16=0.5%, 32=1.0%, >=64=98.0% 00:16:41.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.821 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.821 issued rwts: total=0,3229,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.821 job6: (groupid=0, jobs=1): err= 0: pid=87431: Tue Nov 19 12:36:46 2024 00:16:41.821 write: IOPS=188, BW=47.2MiB/s (49.5MB/s)(482MiB/10219msec); 0 zone resets 00:16:41.821 slat (usec): min=16, max=24985, avg=5019.66, stdev=9175.35 00:16:41.821 clat (msec): min=14, max=581, avg=333.84, stdev=64.61 00:16:41.821 lat (msec): min=14, max=581, avg=338.86, stdev=65.31 00:16:41.821 clat percentiles (msec): 00:16:41.821 | 1.00th=[ 68], 5.00th=[ 201], 10.00th=[ 249], 20.00th=[ 330], 00:16:41.821 | 30.00th=[ 342], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 359], 00:16:41.821 | 70.00th=[ 363], 80.00th=[ 368], 90.00th=[ 372], 95.00th=[ 376], 00:16:41.821 | 99.00th=[ 464], 99.50th=[ 535], 99.90th=[ 584], 99.95th=[ 584], 00:16:41.821 | 99.99th=[ 584] 00:16:41.821 bw ( KiB/s): min=43008, max=70144, per=6.12%, avg=47769.60, stdev=7376.88, samples=20 00:16:41.821 iops : min= 168, max= 274, avg=186.60, stdev=28.82, samples=20 00:16:41.821 lat (msec) : 20=0.21%, 50=0.41%, 100=0.93%, 250=8.55%, 500=89.17% 00:16:41.821 lat (msec) : 750=0.73% 00:16:41.821 cpu : usr=0.32%, sys=0.67%, ctx=2384, majf=0, minf=1 00:16:41.821 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:16:41.821 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.821 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.821 issued rwts: total=0,1929,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.821 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.821 job7: (groupid=0, jobs=1): err= 0: pid=87432: Tue Nov 19 12:36:46 2024 00:16:41.821 write: IOPS=182, BW=45.7MiB/s (47.9MB/s)(466MiB/10210msec); 0 zone resets 00:16:41.822 slat (usec): min=17, max=77182, avg=5288.87, stdev=9537.71 00:16:41.822 clat (msec): min=77, max=569, avg=344.91, stdev=45.06 00:16:41.822 lat (msec): min=77, max=569, avg=350.20, stdev=45.05 00:16:41.822 clat percentiles (msec): 00:16:41.822 | 1.00th=[ 140], 5.00th=[ 266], 10.00th=[ 305], 20.00th=[ 338], 00:16:41.822 | 30.00th=[ 342], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 363], 00:16:41.822 | 70.00th=[ 363], 80.00th=[ 368], 90.00th=[ 372], 95.00th=[ 376], 00:16:41.822 | 99.00th=[ 456], 99.50th=[ 523], 99.90th=[ 567], 99.95th=[ 567], 00:16:41.822 | 99.99th=[ 567] 00:16:41.822 bw ( KiB/s): min=43008, max=53760, per=5.91%, avg=46131.20, stdev=2662.50, samples=20 00:16:41.822 iops : min= 168, max= 210, avg=180.20, stdev=10.40, samples=20 00:16:41.822 lat (msec) : 100=0.38%, 250=3.11%, 500=95.98%, 750=0.54% 00:16:41.822 cpu : usr=0.34%, sys=0.59%, ctx=1702, majf=0, minf=1 00:16:41.822 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:16:41.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.822 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.822 issued rwts: total=0,1865,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.822 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.822 job8: (groupid=0, jobs=1): err= 0: pid=87433: Tue Nov 19 12:36:46 2024 00:16:41.822 write: IOPS=184, BW=46.0MiB/s (48.2MB/s)(470MiB/10216msec); 0 zone resets 00:16:41.822 slat (usec): min=20, max=39332, 
avg=5198.57, stdev=9354.42 00:16:41.822 clat (msec): min=21, max=571, avg=342.41, stdev=54.71 00:16:41.822 lat (msec): min=21, max=572, avg=347.61, stdev=55.00 00:16:41.822 clat percentiles (msec): 00:16:41.822 | 1.00th=[ 70], 5.00th=[ 249], 10.00th=[ 279], 20.00th=[ 338], 00:16:41.822 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:16:41.822 | 70.00th=[ 363], 80.00th=[ 368], 90.00th=[ 372], 95.00th=[ 376], 00:16:41.822 | 99.00th=[ 456], 99.50th=[ 527], 99.90th=[ 575], 99.95th=[ 575], 00:16:41.822 | 99.99th=[ 575] 00:16:41.822 bw ( KiB/s): min=43008, max=63488, per=5.95%, avg=46489.60, stdev=4412.53, samples=20 00:16:41.822 iops : min= 168, max= 248, avg=181.60, stdev=17.24, samples=20 00:16:41.822 lat (msec) : 50=0.53%, 100=0.96%, 250=3.78%, 500=93.99%, 750=0.74% 00:16:41.822 cpu : usr=0.38%, sys=0.62%, ctx=1754, majf=0, minf=1 00:16:41.822 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:16:41.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.822 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.822 issued rwts: total=0,1880,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.822 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.822 job9: (groupid=0, jobs=1): err= 0: pid=87434: Tue Nov 19 12:36:46 2024 00:16:41.822 write: IOPS=505, BW=126MiB/s (132MB/s)(1276MiB/10105msec); 0 zone resets 00:16:41.822 slat (usec): min=16, max=10550, avg=1953.85, stdev=3346.47 00:16:41.822 clat (msec): min=11, max=231, avg=124.69, stdev=11.84 00:16:41.822 lat (msec): min=11, max=231, avg=126.64, stdev=11.56 00:16:41.822 clat percentiles (msec): 00:16:41.822 | 1.00th=[ 78], 5.00th=[ 115], 10.00th=[ 120], 20.00th=[ 122], 00:16:41.822 | 30.00th=[ 123], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 128], 00:16:41.822 | 70.00th=[ 129], 80.00th=[ 130], 90.00th=[ 132], 95.00th=[ 133], 00:16:41.822 | 99.00th=[ 136], 99.50th=[ 180], 99.90th=[ 224], 99.95th=[ 224], 00:16:41.822 | 99.99th=[ 232] 00:16:41.822 bw ( KiB/s): min=122880, max=135168, per=16.52%, avg=129075.20, stdev=3503.80, samples=20 00:16:41.822 iops : min= 480, max= 528, avg=504.20, stdev=13.69, samples=20 00:16:41.822 lat (msec) : 20=0.24%, 50=0.31%, 100=0.98%, 250=98.47% 00:16:41.822 cpu : usr=0.93%, sys=1.43%, ctx=4608, majf=0, minf=1 00:16:41.822 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:41.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.822 issued rwts: total=0,5105,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.822 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.822 job10: (groupid=0, jobs=1): err= 0: pid=87435: Tue Nov 19 12:36:46 2024 00:16:41.822 write: IOPS=504, BW=126MiB/s (132MB/s)(1275MiB/10096msec); 0 zone resets 00:16:41.822 slat (usec): min=16, max=9782, avg=1955.74, stdev=3341.08 00:16:41.822 clat (msec): min=12, max=224, avg=124.73, stdev=11.23 00:16:41.822 lat (msec): min=12, max=224, avg=126.68, stdev=10.92 00:16:41.822 clat percentiles (msec): 00:16:41.822 | 1.00th=[ 80], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 121], 00:16:41.822 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 128], 00:16:41.822 | 70.00th=[ 129], 80.00th=[ 130], 90.00th=[ 132], 95.00th=[ 133], 00:16:41.822 | 99.00th=[ 136], 99.50th=[ 174], 99.90th=[ 218], 99.95th=[ 218], 00:16:41.822 | 99.99th=[ 226] 00:16:41.822 bw ( KiB/s): min=123392, max=135680, per=16.50%, 
avg=128882.90, stdev=3288.17, samples=20 00:16:41.822 iops : min= 482, max= 530, avg=503.40, stdev=12.82, samples=20 00:16:41.822 lat (msec) : 20=0.08%, 50=0.47%, 100=0.75%, 250=98.71% 00:16:41.822 cpu : usr=0.89%, sys=1.58%, ctx=6080, majf=0, minf=1 00:16:41.822 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:41.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:41.822 issued rwts: total=0,5098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.822 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:41.822 00:16:41.822 Run status group 0 (all jobs): 00:16:41.822 WRITE: bw=763MiB/s (800MB/s), 44.5MiB/s-126MiB/s (46.6MB/s-132MB/s), io=7796MiB (8174MB), run=10096-10219msec 00:16:41.822 00:16:41.822 Disk stats (read/write): 00:16:41.822 nvme0n1: ios=49/5478, merge=0/0, ticks=33/1207595, in_queue=1207628, util=97.77% 00:16:41.822 nvme10n1: ios=49/5506, merge=0/0, ticks=49/1208853, in_queue=1208902, util=98.11% 00:16:41.822 nvme1n1: ios=41/3497, merge=0/0, ticks=33/1200352, in_queue=1200385, util=98.05% 00:16:41.822 nvme2n1: ios=34/3606, merge=0/0, ticks=57/1202043, in_queue=1202100, util=98.29% 00:16:41.822 nvme3n1: ios=27/5418, merge=0/0, ticks=38/1207629, in_queue=1207667, util=98.12% 00:16:41.822 nvme4n1: ios=20/6336, merge=0/0, ticks=144/1210698, in_queue=1210842, util=98.49% 00:16:41.822 nvme5n1: ios=0/3735, merge=0/0, ticks=0/1203832, in_queue=1203832, util=98.53% 00:16:41.822 nvme6n1: ios=0/3602, merge=0/0, ticks=0/1201316, in_queue=1201316, util=98.45% 00:16:41.822 nvme7n1: ios=0/3633, merge=0/0, ticks=0/1202226, in_queue=1202226, util=98.72% 00:16:41.822 nvme8n1: ios=0/10082, merge=0/0, ticks=0/1215043, in_queue=1215043, util=98.89% 00:16:41.822 nvme9n1: ios=0/10054, merge=0/0, ticks=0/1212632, in_queue=1212632, util=98.76% 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:41.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:41.822 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:41.822 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:41.822 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode3 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:41.823 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:41.823 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode5 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:16:41.823 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:16:41.823 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode7 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:16:41.823 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:16:41.823 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:16:41.823 12:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:41.823 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:41.823 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:16:41.823 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:41.823 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode9 00:16:41.823 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.823 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:41.823 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.823 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:41.823 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:16:41.823 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:16:41.823 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:16:41.823 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:41.823 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:41.823 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:16:42.082 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:42.082 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:16:42.082 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:42.082 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:16:42.082 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:16:42.083 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:42.083 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:42.083 rmmod nvme_tcp 00:16:42.083 rmmod nvme_fabrics 00:16:42.342 rmmod nvme_keyring 00:16:42.342 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:42.342 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:16:42.342 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:16:42.342 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 86756 ']' 00:16:42.342 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 86756 00:16:42.342 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 86756 ']' 00:16:42.342 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 86756 00:16:42.342 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:16:42.342 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.342 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86756 00:16:42.342 killing process with pid 86756 00:16:42.342 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:42.342 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:42.342 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86756' 00:16:42.342 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 86756 00:16:42.342 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 86756 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == 
iso ']' 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:42.601 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:42.861 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:42.861 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:42.861 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:42.861 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.861 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:42.861 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.861 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:16:42.861 00:16:42.861 real 0m48.757s 00:16:42.861 user 2m47.455s 00:16:42.861 sys 0m25.445s 00:16:42.861 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:42.861 
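The trace above is the tail of multiconnection.sh: the same disconnect/wait/delete teardown repeated for cnode1 through cnode11, followed by nvmftestfini unloading the nvme-tcp modules, killing the target process, and removing the veth topology. Collapsed into plain shell, the per-subsystem loop looks roughly like the sketch below (a sketch only: the real waitforserial_disconnect helper bounds its lsblk retries, and rpc_cmd is the test framework's wrapper around scripts/rpc.py aimed at the running target).

# Condensed sketch of the per-subsystem teardown traced above (multiconnection.sh lines 37-40).
NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
    # Drop the initiator-side connection first...
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
    # ...wait until no block device with serial SPDK$i is visible anymore
    # (simplified: the real helper retries a fixed number of times, not forever)...
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do sleep 1; done
    # ...and only then remove the subsystem on the target side.
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
done

The wait step matters: deleting the subsystem before lsblk confirms the serial is gone would race the in-flight disconnect, which is why the trace shows the lsblk/grep probes between each disconnect and delete.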
************************************ 00:16:42.861 END TEST nvmf_multiconnection 00:16:42.861 ************************************ 00:16:42.861 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:42.861 12:36:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:42.861 12:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:42.861 12:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:42.861 12:36:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:42.861 ************************************ 00:16:42.861 START TEST nvmf_initiator_timeout 00:16:42.861 ************************************ 00:16:42.861 12:36:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:42.861 * Looking for test storage... 00:16:42.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:42.861 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:42.861 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:16:42.861 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:43.121 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:43.121 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:43.121 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:43.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.122 --rc genhtml_branch_coverage=1 00:16:43.122 --rc genhtml_function_coverage=1 00:16:43.122 --rc genhtml_legend=1 00:16:43.122 --rc geninfo_all_blocks=1 00:16:43.122 --rc geninfo_unexecuted_blocks=1 00:16:43.122 00:16:43.122 ' 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:43.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.122 --rc genhtml_branch_coverage=1 00:16:43.122 --rc genhtml_function_coverage=1 00:16:43.122 --rc genhtml_legend=1 00:16:43.122 --rc geninfo_all_blocks=1 00:16:43.122 --rc geninfo_unexecuted_blocks=1 00:16:43.122 00:16:43.122 ' 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:43.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.122 --rc genhtml_branch_coverage=1 00:16:43.122 --rc genhtml_function_coverage=1 00:16:43.122 --rc genhtml_legend=1 00:16:43.122 --rc geninfo_all_blocks=1 00:16:43.122 --rc geninfo_unexecuted_blocks=1 00:16:43.122 00:16:43.122 ' 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:43.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.122 --rc genhtml_branch_coverage=1 00:16:43.122 --rc genhtml_function_coverage=1 00:16:43.122 --rc genhtml_legend=1 00:16:43.122 --rc geninfo_all_blocks=1 00:16:43.122 --rc geninfo_unexecuted_blocks=1 00:16:43.122 00:16:43.122 ' 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.122 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.123 12:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:43.123 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
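The block above is test/nvmf/common.sh being sourced for the initiator_timeout test: it only defines the addressing plan (10.0.0.1 and 10.0.0.2 for the initiator side, 10.0.0.3 and 10.0.0.4 inside the target namespace) and the interface names. nvmf_veth_init then realizes that plan; the "Cannot find device" lines that follow are its idempotent cleanup pass finding nothing to remove on a fresh runner. Condensed from the ip(8) calls in the trace, the setup for the first interface pair looks like the sketch below (the bridge step is inferred from the nvmf_br teardown visible in the multiconnection cleanup above, not shown verbatim here).

# Condensed sketch of what nvmf_veth_init builds, per the trace that follows.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
# The helper repeats this for the *_if2 / *_br2 pair (10.0.0.2 and 10.0.0.4) and then
# enslaves the *_br peers to a bridge named nvmf_br, so the initiator addresses can
# reach the target addresses across the namespace boundary.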
00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:43.123 Cannot find device "nvmf_init_br" 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:43.123 Cannot find device "nvmf_init_br2" 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:43.123 Cannot find device "nvmf_tgt_br" 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:43.123 Cannot find device "nvmf_tgt_br2" 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:43.123 Cannot find device "nvmf_init_br" 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:43.123 Cannot find device "nvmf_init_br2" 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:43.123 Cannot find device "nvmf_tgt_br" 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:43.123 Cannot find device "nvmf_tgt_br2" 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:16:43.123 12:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:43.123 Cannot find device "nvmf_br" 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:43.123 Cannot find device "nvmf_init_if" 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:43.123 Cannot find device "nvmf_init_if2" 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:43.123 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:43.123 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:43.123 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:43.383 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:43.383 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:16:43.383 00:16:43.383 --- 10.0.0.3 ping statistics --- 00:16:43.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.383 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:43.383 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:43.383 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:16:43.383 00:16:43.383 --- 10.0.0.4 ping statistics --- 00:16:43.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.383 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:43.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:43.383 00:16:43.383 --- 10.0.0.1 ping statistics --- 00:16:43.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.383 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:43.383 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:43.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:16:43.383 00:16:43.383 --- 10.0.0.2 ping statistics --- 00:16:43.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.383 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:43.384 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.384 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # return 0 00:16:43.384 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:43.384 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.384 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:43.384 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:43.384 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.384 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:43.384 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:43.643 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:16:43.643 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:43.643 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:43.643 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:43.643 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=87861 00:16:43.643 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:43.643 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 87861 00:16:43.643 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 87861 ']' 00:16:43.643 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.643 12:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:43.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.643 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.643 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:43.643 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:43.643 [2024-11-19 12:36:48.716289] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:43.643 [2024-11-19 12:36:48.716390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.643 [2024-11-19 12:36:48.854212] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.643 [2024-11-19 12:36:48.895010] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.643 [2024-11-19 12:36:48.895075] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.643 [2024-11-19 12:36:48.895094] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.643 [2024-11-19 12:36:48.895124] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.643 [2024-11-19 12:36:48.895133] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
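Once the reactors are up, the rpc_cmd calls that follow configure the target: a 64 MiB, 512-byte-block malloc bdev is wrapped in a delay bdev, exposed through subsystem nqn.2016-06.io.spdk:cnode1 over TCP on 10.0.0.3:4420, and the initiator then connects with nvme-cli. A sketch of the equivalent standalone commands, assuming rpc_cmd simply forwards these arguments to scripts/rpc.py against the running nvmf_tgt (the host NQN/ID placeholders stand in for the per-run values generated with nvme gen-hostnqn):

# backing bdev and delay bdev (latency arguments exactly as in the trace)
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
# TCP transport, subsystem, namespace and listener on the namespaced target address
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# initiator side: connect to the exported namespace
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420

The remainder of the test raises the Delay0 latencies to 31000000 (bdev_delay_update_latency at initiator_timeout.sh@40-@43) while fio writes through /dev/nvme0n1, then restores them to 30, exercising the initiator timeout path without failing the I/O.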
00:16:43.643 [2024-11-19 12:36:48.895299] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.643 [2024-11-19 12:36:48.895416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.643 [2024-11-19 12:36:48.896038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:43.643 [2024-11-19 12:36:48.896090] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.903 [2024-11-19 12:36:48.929039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:43.903 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:43.903 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:16:43.903 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:43.903 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:43.903 12:36:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:43.903 Malloc0 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:43.903 Delay0 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:43.903 [2024-11-19 12:36:49.056338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:43.903 12:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:43.903 [2024-11-19 12:36:49.084596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.903 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:44.160 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:16:44.160 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:16:44.160 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:44.160 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:44.160 12:36:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:16:46.065 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:46.065 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:46.065 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:46.065 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:46.065 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.065 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:16:46.065 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=87918 00:16:46.065 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@37 -- # sleep 3 00:16:46.065 12:36:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:16:46.065 [global] 00:16:46.065 thread=1 00:16:46.065 invalidate=1 00:16:46.065 rw=write 00:16:46.065 time_based=1 00:16:46.065 runtime=60 00:16:46.065 ioengine=libaio 00:16:46.065 direct=1 00:16:46.065 bs=4096 00:16:46.065 iodepth=1 00:16:46.065 norandommap=0 00:16:46.065 numjobs=1 00:16:46.065 00:16:46.065 verify_dump=1 00:16:46.065 verify_backlog=512 00:16:46.065 verify_state_save=0 00:16:46.065 do_verify=1 00:16:46.065 verify=crc32c-intel 00:16:46.065 [job0] 00:16:46.065 filename=/dev/nvme0n1 00:16:46.065 Could not set queue depth (nvme0n1) 00:16:46.325 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:46.325 fio-3.35 00:16:46.325 Starting 1 thread 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:49.614 true 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:49.614 true 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:49.614 true 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:49.614 true 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.614 12:36:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.147 true 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.147 true 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.147 true 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.147 true 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:16:52.147 12:36:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 87918 00:17:48.480 00:17:48.480 job0: (groupid=0, jobs=1): err= 0: pid=87939: Tue Nov 19 12:37:51 2024 00:17:48.480 read: IOPS=807, BW=3232KiB/s (3309kB/s)(189MiB/60000msec) 00:17:48.480 slat (usec): min=10, max=9855, avg=14.14, stdev=59.82 00:17:48.480 clat (usec): min=97, max=40818k, avg=1044.42, stdev=185391.71 00:17:48.480 lat (usec): min=165, max=40818k, avg=1058.57, stdev=185391.71 00:17:48.480 clat percentiles (usec): 00:17:48.480 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:17:48.481 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:17:48.481 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 243], 00:17:48.481 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 326], 99.95th=[ 433], 00:17:48.481 | 99.99th=[ 1123] 00:17:48.481 write: IOPS=810, BW=3243KiB/s (3320kB/s)(190MiB/60000msec); 0 zone resets 00:17:48.481 slat (usec): min=12, max=546, avg=20.11, stdev= 7.38 00:17:48.481 clat (usec): min=44, max=4063, avg=155.46, stdev=32.30 00:17:48.481 lat (usec): min=131, max=4092, avg=175.57, stdev=33.72 00:17:48.481 clat percentiles (usec): 00:17:48.481 | 1.00th=[ 121], 5.00th=[ 126], 10.00th=[ 130], 20.00th=[ 137], 00:17:48.481 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 157], 00:17:48.481 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 186], 95.00th=[ 196], 00:17:48.481 | 99.00th=[ 219], 
99.50th=[ 227], 99.90th=[ 265], 99.95th=[ 519], 00:17:48.481 | 99.99th=[ 1123] 00:17:48.481 bw ( KiB/s): min= 4504, max=12288, per=100.00%, avg=10024.42, stdev=1553.69, samples=38 00:17:48.481 iops : min= 1126, max= 3072, avg=2506.11, stdev=388.42, samples=38 00:17:48.481 lat (usec) : 50=0.01%, 100=0.01%, 250=98.32%, 500=1.63%, 750=0.03% 00:17:48.481 lat (usec) : 1000=0.01% 00:17:48.481 lat (msec) : 2=0.01%, 10=0.01%, >=2000=0.01% 00:17:48.481 cpu : usr=0.55%, sys=2.17%, ctx=97136, majf=0, minf=5 00:17:48.481 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:48.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.481 issued rwts: total=48475,48640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.481 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:48.481 00:17:48.481 Run status group 0 (all jobs): 00:17:48.481 READ: bw=3232KiB/s (3309kB/s), 3232KiB/s-3232KiB/s (3309kB/s-3309kB/s), io=189MiB (199MB), run=60000-60000msec 00:17:48.481 WRITE: bw=3243KiB/s (3320kB/s), 3243KiB/s-3243KiB/s (3320kB/s-3320kB/s), io=190MiB (199MB), run=60000-60000msec 00:17:48.481 00:17:48.481 Disk stats (read/write): 00:17:48.481 nvme0n1: ios=48375/48477, merge=0/0, ticks=10223/8119, in_queue=18342, util=99.63% 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:48.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:17:48.481 nvmf hotplug test: fio successful as expected 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f 
./local-job0-0-verify.state 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:48.481 rmmod nvme_tcp 00:17:48.481 rmmod nvme_fabrics 00:17:48.481 rmmod nvme_keyring 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 87861 ']' 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 87861 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 87861 ']' 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 87861 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87861 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:48.481 killing process with pid 87861 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87861' 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 87861 00:17:48.481 12:37:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 87861 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:17:48.481 
12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:17:48.481 00:17:48.481 real 1m4.343s 00:17:48.481 user 3m49.307s 00:17:48.481 sys 0m22.247s 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:48.481 ************************************ 00:17:48.481 END TEST nvmf_initiator_timeout 00:17:48.481 ************************************ 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - 
SIGINT SIGTERM EXIT 00:17:48.481 00:17:48.481 real 6m49.574s 00:17:48.481 user 17m2.948s 00:17:48.481 sys 1m51.677s 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:48.481 12:37:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:48.481 ************************************ 00:17:48.481 END TEST nvmf_target_extra 00:17:48.481 ************************************ 00:17:48.482 12:37:52 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:48.482 12:37:52 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:48.482 12:37:52 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:48.482 12:37:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:48.482 ************************************ 00:17:48.482 START TEST nvmf_host 00:17:48.482 ************************************ 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:48.482 * Looking for test storage... 00:17:48.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:48.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.482 --rc genhtml_branch_coverage=1 00:17:48.482 --rc genhtml_function_coverage=1 00:17:48.482 --rc genhtml_legend=1 00:17:48.482 --rc geninfo_all_blocks=1 00:17:48.482 --rc geninfo_unexecuted_blocks=1 00:17:48.482 00:17:48.482 ' 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:48.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.482 --rc genhtml_branch_coverage=1 00:17:48.482 --rc genhtml_function_coverage=1 00:17:48.482 --rc genhtml_legend=1 00:17:48.482 --rc geninfo_all_blocks=1 00:17:48.482 --rc geninfo_unexecuted_blocks=1 00:17:48.482 00:17:48.482 ' 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:48.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.482 --rc genhtml_branch_coverage=1 00:17:48.482 --rc genhtml_function_coverage=1 00:17:48.482 --rc genhtml_legend=1 00:17:48.482 --rc geninfo_all_blocks=1 00:17:48.482 --rc geninfo_unexecuted_blocks=1 00:17:48.482 00:17:48.482 ' 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:48.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.482 --rc genhtml_branch_coverage=1 00:17:48.482 --rc genhtml_function_coverage=1 00:17:48.482 --rc genhtml_legend=1 00:17:48.482 --rc geninfo_all_blocks=1 00:17:48.482 --rc geninfo_unexecuted_blocks=1 00:17:48.482 00:17:48.482 ' 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.482 12:37:52 
nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:48.482 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.482 ************************************ 00:17:48.482 START TEST nvmf_identify 00:17:48.482 ************************************ 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:48.482 * Looking for test storage... 
00:17:48.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:48.482 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:48.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.483 --rc genhtml_branch_coverage=1 00:17:48.483 --rc genhtml_function_coverage=1 00:17:48.483 --rc genhtml_legend=1 00:17:48.483 --rc geninfo_all_blocks=1 00:17:48.483 --rc geninfo_unexecuted_blocks=1 00:17:48.483 00:17:48.483 ' 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:48.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.483 --rc genhtml_branch_coverage=1 00:17:48.483 --rc genhtml_function_coverage=1 00:17:48.483 --rc genhtml_legend=1 00:17:48.483 --rc geninfo_all_blocks=1 00:17:48.483 --rc geninfo_unexecuted_blocks=1 00:17:48.483 00:17:48.483 ' 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:48.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.483 --rc genhtml_branch_coverage=1 00:17:48.483 --rc genhtml_function_coverage=1 00:17:48.483 --rc genhtml_legend=1 00:17:48.483 --rc geninfo_all_blocks=1 00:17:48.483 --rc geninfo_unexecuted_blocks=1 00:17:48.483 00:17:48.483 ' 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:48.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.483 --rc genhtml_branch_coverage=1 00:17:48.483 --rc genhtml_function_coverage=1 00:17:48.483 --rc genhtml_legend=1 00:17:48.483 --rc geninfo_all_blocks=1 00:17:48.483 --rc geninfo_unexecuted_blocks=1 00:17:48.483 00:17:48.483 ' 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.483 
12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:48.483 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.483 12:37:52 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.483 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:48.484 Cannot find device "nvmf_init_br" 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:48.484 Cannot find device "nvmf_init_br2" 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:48.484 Cannot find device "nvmf_tgt_br" 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:48.484 Cannot find device "nvmf_tgt_br2" 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:48.484 Cannot find device "nvmf_init_br" 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:48.484 Cannot find device "nvmf_init_br2" 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:48.484 Cannot find device "nvmf_tgt_br" 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:48.484 Cannot find device "nvmf_tgt_br2" 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:48.484 Cannot find device "nvmf_br" 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:48.484 Cannot find device "nvmf_init_if" 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:48.484 Cannot find device "nvmf_init_if2" 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:48.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:48.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:48.484 12:37:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:48.484 
12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:48.484 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:48.484 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:48.484 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:17:48.484 00:17:48.484 --- 10.0.0.3 ping statistics --- 00:17:48.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.485 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:48.485 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:48.485 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:17:48.485 00:17:48.485 --- 10.0.0.4 ping statistics --- 00:17:48.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.485 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:48.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:48.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:48.485 00:17:48.485 --- 10.0.0.1 ping statistics --- 00:17:48.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.485 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:48.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:17:48.485 00:17:48.485 --- 10.0.0.2 ping statistics --- 00:17:48.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.485 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # return 0 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:48.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
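For reference, the nvmf_veth_init sequence traced above amounts to roughly the following standalone commands (interface names, addresses, and iptables rules taken from the trace; the earlier "Cannot find device" / "Cannot open network namespace" messages are only the cleanup pass over a topology that does not exist yet):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                   # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host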
00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=88878 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 88878 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 88878 ']' 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:48.485 [2024-11-19 12:37:53.305838] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:48.485 [2024-11-19 12:37:53.306107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.485 [2024-11-19 12:37:53.453073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:48.485 [2024-11-19 12:37:53.495078] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.485 [2024-11-19 12:37:53.495397] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.485 [2024-11-19 12:37:53.495585] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.485 [2024-11-19 12:37:53.495727] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.485 [2024-11-19 12:37:53.495741] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
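The target itself is then launched inside the namespace, as traced by host/identify.sh@18. Outside the harness the step looks approximately like the sketch below; waitforlisten is a test helper, so the polling loop here is only an approximation of what it does (wait for the RPC socket /var/tmp/spdk.sock), not the helper itself:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # crude stand-in for waitforlisten: block until the RPC socket appears
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done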
00:17:48.485 [2024-11-19 12:37:53.495812] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.485 [2024-11-19 12:37:53.495942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.485 [2024-11-19 12:37:53.496574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:48.485 [2024-11-19 12:37:53.496627] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.485 [2024-11-19 12:37:53.529311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:48.485 [2024-11-19 12:37:53.594777] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:48.485 Malloc0 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:48.485 [2024-11-19 12:37:53.686861] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.485 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:48.485 [ 00:17:48.485 { 00:17:48.485 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:48.485 "subtype": "Discovery", 00:17:48.485 "listen_addresses": [ 00:17:48.485 { 00:17:48.485 "trtype": "TCP", 00:17:48.485 "adrfam": "IPv4", 00:17:48.485 "traddr": "10.0.0.3", 00:17:48.485 "trsvcid": "4420" 00:17:48.485 } 00:17:48.485 ], 00:17:48.485 "allow_any_host": true, 00:17:48.485 "hosts": [] 00:17:48.485 }, 00:17:48.485 { 00:17:48.485 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.485 "subtype": "NVMe", 00:17:48.485 "listen_addresses": [ 00:17:48.485 { 00:17:48.485 "trtype": "TCP", 00:17:48.485 "adrfam": "IPv4", 00:17:48.485 "traddr": "10.0.0.3", 00:17:48.485 "trsvcid": "4420" 00:17:48.485 } 00:17:48.485 ], 00:17:48.485 "allow_any_host": true, 00:17:48.485 "hosts": [], 00:17:48.485 "serial_number": "SPDK00000000000001", 00:17:48.485 "model_number": "SPDK bdev Controller", 00:17:48.748 "max_namespaces": 32, 00:17:48.748 "min_cntlid": 1, 00:17:48.748 "max_cntlid": 65519, 00:17:48.748 "namespaces": [ 00:17:48.748 { 00:17:48.748 "nsid": 1, 00:17:48.748 "bdev_name": "Malloc0", 00:17:48.748 "name": "Malloc0", 00:17:48.748 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:48.748 "eui64": "ABCDEF0123456789", 00:17:48.748 "uuid": "b59cefb2-57e9-418d-932d-5fa668cef50b" 00:17:48.748 } 00:17:48.748 ] 00:17:48.748 } 00:17:48.748 ] 00:17:48.748 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.748 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:48.748 [2024-11-19 12:37:53.746739] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
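The rpc_cmd calls traced above (create the TCP transport, a 64 MiB / 512 B Malloc0 bdev, the cnode1 subsystem, its namespace, and the listeners) configure the target before spdk_nvme_identify is run. Against a standalone target the same configuration would go through SPDK's rpc.py; the scripts/rpc.py path is the usual SPDK repo location and is an assumption here, and the full discovery NQN is spelled out where the harness uses the "discovery" shorthand:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_get_subsystems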
00:17:48.748 [2024-11-19 12:37:53.746923] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88907 ] 00:17:48.748 [2024-11-19 12:37:53.890370] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:48.748 [2024-11-19 12:37:53.890445] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:48.748 [2024-11-19 12:37:53.890452] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:48.748 [2024-11-19 12:37:53.890463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:48.748 [2024-11-19 12:37:53.890472] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:48.748 [2024-11-19 12:37:53.890801] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:48.748 [2024-11-19 12:37:53.890869] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x4e6bd0 0 00:17:48.748 [2024-11-19 12:37:53.896708] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:48.748 [2024-11-19 12:37:53.896733] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:48.748 [2024-11-19 12:37:53.896755] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:48.748 [2024-11-19 12:37:53.896759] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:48.748 [2024-11-19 12:37:53.896794] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.748 [2024-11-19 12:37:53.896801] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.748 [2024-11-19 12:37:53.896806] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e6bd0) 00:17:48.748 [2024-11-19 12:37:53.896820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:48.748 [2024-11-19 12:37:53.896851] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d0c0, cid 0, qid 0 00:17:48.748 [2024-11-19 12:37:53.903757] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.748 [2024-11-19 12:37:53.903777] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.748 [2024-11-19 12:37:53.903798] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.748 [2024-11-19 12:37:53.903804] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d0c0) on tqpair=0x4e6bd0 00:17:48.748 [2024-11-19 12:37:53.903819] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:48.748 [2024-11-19 12:37:53.903827] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:48.748 [2024-11-19 12:37:53.903834] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:48.748 [2024-11-19 12:37:53.903852] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.748 [2024-11-19 12:37:53.903857] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.748 
[2024-11-19 12:37:53.903862] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e6bd0) 00:17:48.748 [2024-11-19 12:37:53.903872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.748 [2024-11-19 12:37:53.903899] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d0c0, cid 0, qid 0 00:17:48.748 [2024-11-19 12:37:53.903960] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.748 [2024-11-19 12:37:53.903967] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.748 [2024-11-19 12:37:53.903971] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.748 [2024-11-19 12:37:53.903976] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d0c0) on tqpair=0x4e6bd0 00:17:48.748 [2024-11-19 12:37:53.903989] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:48.748 [2024-11-19 12:37:53.903998] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:48.748 [2024-11-19 12:37:53.904022] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.748 [2024-11-19 12:37:53.904027] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.748 [2024-11-19 12:37:53.904031] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e6bd0) 00:17:48.748 [2024-11-19 12:37:53.904040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.748 [2024-11-19 12:37:53.904060] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d0c0, cid 0, qid 0 00:17:48.748 [2024-11-19 12:37:53.904122] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.748 [2024-11-19 12:37:53.904129] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.748 [2024-11-19 12:37:53.904133] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.748 [2024-11-19 12:37:53.904138] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d0c0) on tqpair=0x4e6bd0 00:17:48.748 [2024-11-19 12:37:53.904144] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:48.748 [2024-11-19 12:37:53.904153] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:48.748 [2024-11-19 12:37:53.904161] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.748 [2024-11-19 12:37:53.904165] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.748 [2024-11-19 12:37:53.904169] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e6bd0) 00:17:48.748 [2024-11-19 12:37:53.904177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.748 [2024-11-19 12:37:53.904195] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d0c0, cid 0, qid 0 00:17:48.748 [2024-11-19 12:37:53.904245] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.748 [2024-11-19 12:37:53.904253] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:17:48.748 [2024-11-19 12:37:53.904257] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.748 [2024-11-19 12:37:53.904261] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d0c0) on tqpair=0x4e6bd0 00:17:48.748 [2024-11-19 12:37:53.904268] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:48.748 [2024-11-19 12:37:53.904278] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.748 [2024-11-19 12:37:53.904283] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.748 [2024-11-19 12:37:53.904287] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e6bd0) 00:17:48.748 [2024-11-19 12:37:53.904295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.748 [2024-11-19 12:37:53.904313] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d0c0, cid 0, qid 0 00:17:48.748 [2024-11-19 12:37:53.904361] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.748 [2024-11-19 12:37:53.904368] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.748 [2024-11-19 12:37:53.904372] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.748 [2024-11-19 12:37:53.904377] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d0c0) on tqpair=0x4e6bd0 00:17:48.748 [2024-11-19 12:37:53.904382] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:48.748 [2024-11-19 12:37:53.904388] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:48.748 [2024-11-19 12:37:53.904396] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:48.749 [2024-11-19 12:37:53.904502] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:48.749 [2024-11-19 12:37:53.904518] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:48.749 [2024-11-19 12:37:53.904529] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.904534] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.904538] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e6bd0) 00:17:48.749 [2024-11-19 12:37:53.904546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.749 [2024-11-19 12:37:53.904566] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d0c0, cid 0, qid 0 00:17:48.749 [2024-11-19 12:37:53.904613] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.749 [2024-11-19 12:37:53.904620] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.749 [2024-11-19 12:37:53.904624] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.904629] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d0c0) on tqpair=0x4e6bd0 00:17:48.749 [2024-11-19 12:37:53.904634] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:48.749 [2024-11-19 12:37:53.904657] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.904662] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.904681] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e6bd0) 00:17:48.749 [2024-11-19 12:37:53.904691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.749 [2024-11-19 12:37:53.904710] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d0c0, cid 0, qid 0 00:17:48.749 [2024-11-19 12:37:53.904768] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.749 [2024-11-19 12:37:53.904776] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.749 [2024-11-19 12:37:53.904780] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.904784] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d0c0) on tqpair=0x4e6bd0 00:17:48.749 [2024-11-19 12:37:53.904790] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:48.749 [2024-11-19 12:37:53.904795] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:48.749 [2024-11-19 12:37:53.904804] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:48.749 [2024-11-19 12:37:53.904820] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:48.749 [2024-11-19 12:37:53.904831] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.904836] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e6bd0) 00:17:48.749 [2024-11-19 12:37:53.904844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.749 [2024-11-19 12:37:53.904865] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d0c0, cid 0, qid 0 00:17:48.749 [2024-11-19 12:37:53.904941] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:48.749 [2024-11-19 12:37:53.904948] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:48.749 [2024-11-19 12:37:53.904953] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.904957] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4e6bd0): datao=0, datal=4096, cccid=0 00:17:48.749 [2024-11-19 12:37:53.904962] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x52d0c0) on tqpair(0x4e6bd0): expected_datao=0, payload_size=4096 00:17:48.749 [2024-11-19 12:37:53.904967] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.904975] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.904980] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.904989] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.749 [2024-11-19 12:37:53.904995] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.749 [2024-11-19 12:37:53.904999] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.905003] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d0c0) on tqpair=0x4e6bd0 00:17:48.749 [2024-11-19 12:37:53.905012] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:48.749 [2024-11-19 12:37:53.905018] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:48.749 [2024-11-19 12:37:53.905023] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:48.749 [2024-11-19 12:37:53.905028] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:48.749 [2024-11-19 12:37:53.905034] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:48.749 [2024-11-19 12:37:53.905039] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:48.749 [2024-11-19 12:37:53.905048] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:48.749 [2024-11-19 12:37:53.905056] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.905061] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.905065] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e6bd0) 00:17:48.749 [2024-11-19 12:37:53.905073] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:48.749 [2024-11-19 12:37:53.905092] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d0c0, cid 0, qid 0 00:17:48.749 [2024-11-19 12:37:53.905149] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.749 [2024-11-19 12:37:53.905157] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.749 [2024-11-19 12:37:53.905161] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.905165] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d0c0) on tqpair=0x4e6bd0 00:17:48.749 [2024-11-19 12:37:53.905173] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.905178] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.905182] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4e6bd0) 00:17:48.749 [2024-11-19 12:37:53.905189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.749 [2024-11-19 12:37:53.905196] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:17:48.749 [2024-11-19 12:37:53.905200] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.905204] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x4e6bd0) 00:17:48.749 [2024-11-19 12:37:53.905210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.749 [2024-11-19 12:37:53.905217] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.905221] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.905225] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x4e6bd0) 00:17:48.749 [2024-11-19 12:37:53.905231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.749 [2024-11-19 12:37:53.905237] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.905242] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.905246] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e6bd0) 00:17:48.749 [2024-11-19 12:37:53.905252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.749 [2024-11-19 12:37:53.905257] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:48.749 [2024-11-19 12:37:53.905271] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:48.749 [2024-11-19 12:37:53.905280] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.905284] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4e6bd0) 00:17:48.749 [2024-11-19 12:37:53.905292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.749 [2024-11-19 12:37:53.905312] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d0c0, cid 0, qid 0 00:17:48.749 [2024-11-19 12:37:53.905319] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d240, cid 1, qid 0 00:17:48.749 [2024-11-19 12:37:53.905325] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d3c0, cid 2, qid 0 00:17:48.749 [2024-11-19 12:37:53.905330] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d540, cid 3, qid 0 00:17:48.749 [2024-11-19 12:37:53.905335] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d6c0, cid 4, qid 0 00:17:48.749 [2024-11-19 12:37:53.905420] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.749 [2024-11-19 12:37:53.905428] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.749 [2024-11-19 12:37:53.905431] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.905436] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d6c0) on tqpair=0x4e6bd0 00:17:48.749 [2024-11-19 12:37:53.905442] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:48.749 [2024-11-19 12:37:53.905448] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:48.749 [2024-11-19 12:37:53.905459] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.905464] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4e6bd0) 00:17:48.749 [2024-11-19 12:37:53.905473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.749 [2024-11-19 12:37:53.905491] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d6c0, cid 4, qid 0 00:17:48.749 [2024-11-19 12:37:53.905554] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:48.749 [2024-11-19 12:37:53.905562] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:48.749 [2024-11-19 12:37:53.905566] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:48.749 [2024-11-19 12:37:53.905570] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4e6bd0): datao=0, datal=4096, cccid=4 00:17:48.750 [2024-11-19 12:37:53.905575] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x52d6c0) on tqpair(0x4e6bd0): expected_datao=0, payload_size=4096 00:17:48.750 [2024-11-19 12:37:53.905580] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.905588] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.905592] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.905601] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.750 [2024-11-19 12:37:53.905607] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.750 [2024-11-19 12:37:53.905611] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.905615] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d6c0) on tqpair=0x4e6bd0 00:17:48.750 [2024-11-19 12:37:53.905629] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:48.750 [2024-11-19 12:37:53.905656] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.905662] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4e6bd0) 00:17:48.750 [2024-11-19 12:37:53.905705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.750 [2024-11-19 12:37:53.905732] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.905737] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.905742] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4e6bd0) 00:17:48.750 [2024-11-19 12:37:53.905749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:48.750 [2024-11-19 12:37:53.905774] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d6c0, cid 4, qid 0 00:17:48.750 [2024-11-19 12:37:53.905782] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d840, cid 5, qid 0 00:17:48.750 [2024-11-19 12:37:53.905876] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:48.750 [2024-11-19 12:37:53.905884] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:48.750 [2024-11-19 12:37:53.905888] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.905893] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4e6bd0): datao=0, datal=1024, cccid=4 00:17:48.750 [2024-11-19 12:37:53.905898] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x52d6c0) on tqpair(0x4e6bd0): expected_datao=0, payload_size=1024 00:17:48.750 [2024-11-19 12:37:53.905903] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.905911] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.905915] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.905922] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.750 [2024-11-19 12:37:53.905929] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.750 [2024-11-19 12:37:53.905933] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.905938] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d840) on tqpair=0x4e6bd0 00:17:48.750 [2024-11-19 12:37:53.905957] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.750 [2024-11-19 12:37:53.905965] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.750 [2024-11-19 12:37:53.905969] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.905974] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d6c0) on tqpair=0x4e6bd0 00:17:48.750 [2024-11-19 12:37:53.905986] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.905992] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4e6bd0) 00:17:48.750 [2024-11-19 12:37:53.906000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.750 [2024-11-19 12:37:53.906026] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d6c0, cid 4, qid 0 00:17:48.750 [2024-11-19 12:37:53.906129] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:48.750 [2024-11-19 12:37:53.906142] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:48.750 [2024-11-19 12:37:53.906147] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.906151] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4e6bd0): datao=0, datal=3072, cccid=4 00:17:48.750 [2024-11-19 12:37:53.906157] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x52d6c0) on tqpair(0x4e6bd0): expected_datao=0, payload_size=3072 00:17:48.750 [2024-11-19 12:37:53.906162] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.906169] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.906173] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:48.750 [2024-11-19 
12:37:53.906182] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.750 [2024-11-19 12:37:53.906188] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.750 [2024-11-19 12:37:53.906192] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.906197] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d6c0) on tqpair=0x4e6bd0 00:17:48.750 [2024-11-19 12:37:53.906207] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.906212] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4e6bd0) 00:17:48.750 [2024-11-19 12:37:53.906219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.750 [2024-11-19 12:37:53.906243] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d6c0, cid 4, qid 0 00:17:48.750 [2024-11-19 12:37:53.906302] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:48.750 [2024-11-19 12:37:53.906309] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:48.750 [2024-11-19 12:37:53.906314] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.906318] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4e6bd0): datao=0, datal=8, cccid=4 00:17:48.750 [2024-11-19 12:37:53.906323] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x52d6c0) on tqpair(0x4e6bd0): expected_datao=0, payload_size=8 00:17:48.750 [2024-11-19 12:37:53.906327] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.906334] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.906339] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.906354] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.750 [2024-11-19 12:37:53.906361] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.750 [2024-11-19 12:37:53.906366] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.750 [2024-11-19 12:37:53.906370] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d6c0) on tqpair=0x4e6bd0 00:17:48.750 ===================================================== 00:17:48.750 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:48.750 ===================================================== 00:17:48.750 Controller Capabilities/Features 00:17:48.750 ================================ 00:17:48.750 Vendor ID: 0000 00:17:48.750 Subsystem Vendor ID: 0000 00:17:48.750 Serial Number: .................... 00:17:48.750 Model Number: ........................................ 
00:17:48.750 Firmware Version: 24.09.1 00:17:48.750 Recommended Arb Burst: 0 00:17:48.750 IEEE OUI Identifier: 00 00 00 00:17:48.750 Multi-path I/O 00:17:48.750 May have multiple subsystem ports: No 00:17:48.750 May have multiple controllers: No 00:17:48.750 Associated with SR-IOV VF: No 00:17:48.750 Max Data Transfer Size: 131072 00:17:48.750 Max Number of Namespaces: 0 00:17:48.750 Max Number of I/O Queues: 1024 00:17:48.750 NVMe Specification Version (VS): 1.3 00:17:48.750 NVMe Specification Version (Identify): 1.3 00:17:48.750 Maximum Queue Entries: 128 00:17:48.750 Contiguous Queues Required: Yes 00:17:48.750 Arbitration Mechanisms Supported 00:17:48.750 Weighted Round Robin: Not Supported 00:17:48.750 Vendor Specific: Not Supported 00:17:48.750 Reset Timeout: 15000 ms 00:17:48.750 Doorbell Stride: 4 bytes 00:17:48.750 NVM Subsystem Reset: Not Supported 00:17:48.750 Command Sets Supported 00:17:48.750 NVM Command Set: Supported 00:17:48.750 Boot Partition: Not Supported 00:17:48.750 Memory Page Size Minimum: 4096 bytes 00:17:48.750 Memory Page Size Maximum: 4096 bytes 00:17:48.750 Persistent Memory Region: Not Supported 00:17:48.750 Optional Asynchronous Events Supported 00:17:48.750 Namespace Attribute Notices: Not Supported 00:17:48.750 Firmware Activation Notices: Not Supported 00:17:48.750 ANA Change Notices: Not Supported 00:17:48.750 PLE Aggregate Log Change Notices: Not Supported 00:17:48.750 LBA Status Info Alert Notices: Not Supported 00:17:48.750 EGE Aggregate Log Change Notices: Not Supported 00:17:48.750 Normal NVM Subsystem Shutdown event: Not Supported 00:17:48.750 Zone Descriptor Change Notices: Not Supported 00:17:48.750 Discovery Log Change Notices: Supported 00:17:48.750 Controller Attributes 00:17:48.750 128-bit Host Identifier: Not Supported 00:17:48.750 Non-Operational Permissive Mode: Not Supported 00:17:48.750 NVM Sets: Not Supported 00:17:48.750 Read Recovery Levels: Not Supported 00:17:48.750 Endurance Groups: Not Supported 00:17:48.750 Predictable Latency Mode: Not Supported 00:17:48.750 Traffic Based Keep ALive: Not Supported 00:17:48.750 Namespace Granularity: Not Supported 00:17:48.751 SQ Associations: Not Supported 00:17:48.751 UUID List: Not Supported 00:17:48.751 Multi-Domain Subsystem: Not Supported 00:17:48.751 Fixed Capacity Management: Not Supported 00:17:48.751 Variable Capacity Management: Not Supported 00:17:48.751 Delete Endurance Group: Not Supported 00:17:48.751 Delete NVM Set: Not Supported 00:17:48.751 Extended LBA Formats Supported: Not Supported 00:17:48.751 Flexible Data Placement Supported: Not Supported 00:17:48.751 00:17:48.751 Controller Memory Buffer Support 00:17:48.751 ================================ 00:17:48.751 Supported: No 00:17:48.751 00:17:48.751 Persistent Memory Region Support 00:17:48.751 ================================ 00:17:48.751 Supported: No 00:17:48.751 00:17:48.751 Admin Command Set Attributes 00:17:48.751 ============================ 00:17:48.751 Security Send/Receive: Not Supported 00:17:48.751 Format NVM: Not Supported 00:17:48.751 Firmware Activate/Download: Not Supported 00:17:48.751 Namespace Management: Not Supported 00:17:48.751 Device Self-Test: Not Supported 00:17:48.751 Directives: Not Supported 00:17:48.751 NVMe-MI: Not Supported 00:17:48.751 Virtualization Management: Not Supported 00:17:48.751 Doorbell Buffer Config: Not Supported 00:17:48.751 Get LBA Status Capability: Not Supported 00:17:48.751 Command & Feature Lockdown Capability: Not Supported 00:17:48.751 Abort Command Limit: 1 00:17:48.751 
Async Event Request Limit: 4 00:17:48.751 Number of Firmware Slots: N/A 00:17:48.751 Firmware Slot 1 Read-Only: N/A 00:17:48.751 Firmware Activation Without Reset: N/A 00:17:48.751 Multiple Update Detection Support: N/A 00:17:48.751 Firmware Update Granularity: No Information Provided 00:17:48.751 Per-Namespace SMART Log: No 00:17:48.751 Asymmetric Namespace Access Log Page: Not Supported 00:17:48.751 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:48.751 Command Effects Log Page: Not Supported 00:17:48.751 Get Log Page Extended Data: Supported 00:17:48.751 Telemetry Log Pages: Not Supported 00:17:48.751 Persistent Event Log Pages: Not Supported 00:17:48.751 Supported Log Pages Log Page: May Support 00:17:48.751 Commands Supported & Effects Log Page: Not Supported 00:17:48.751 Feature Identifiers & Effects Log Page:May Support 00:17:48.751 NVMe-MI Commands & Effects Log Page: May Support 00:17:48.751 Data Area 4 for Telemetry Log: Not Supported 00:17:48.751 Error Log Page Entries Supported: 128 00:17:48.751 Keep Alive: Not Supported 00:17:48.751 00:17:48.751 NVM Command Set Attributes 00:17:48.751 ========================== 00:17:48.751 Submission Queue Entry Size 00:17:48.751 Max: 1 00:17:48.751 Min: 1 00:17:48.751 Completion Queue Entry Size 00:17:48.751 Max: 1 00:17:48.751 Min: 1 00:17:48.751 Number of Namespaces: 0 00:17:48.751 Compare Command: Not Supported 00:17:48.751 Write Uncorrectable Command: Not Supported 00:17:48.751 Dataset Management Command: Not Supported 00:17:48.751 Write Zeroes Command: Not Supported 00:17:48.751 Set Features Save Field: Not Supported 00:17:48.751 Reservations: Not Supported 00:17:48.751 Timestamp: Not Supported 00:17:48.751 Copy: Not Supported 00:17:48.751 Volatile Write Cache: Not Present 00:17:48.751 Atomic Write Unit (Normal): 1 00:17:48.751 Atomic Write Unit (PFail): 1 00:17:48.751 Atomic Compare & Write Unit: 1 00:17:48.751 Fused Compare & Write: Supported 00:17:48.751 Scatter-Gather List 00:17:48.751 SGL Command Set: Supported 00:17:48.751 SGL Keyed: Supported 00:17:48.751 SGL Bit Bucket Descriptor: Not Supported 00:17:48.751 SGL Metadata Pointer: Not Supported 00:17:48.751 Oversized SGL: Not Supported 00:17:48.751 SGL Metadata Address: Not Supported 00:17:48.751 SGL Offset: Supported 00:17:48.751 Transport SGL Data Block: Not Supported 00:17:48.751 Replay Protected Memory Block: Not Supported 00:17:48.751 00:17:48.751 Firmware Slot Information 00:17:48.751 ========================= 00:17:48.751 Active slot: 0 00:17:48.751 00:17:48.751 00:17:48.751 Error Log 00:17:48.751 ========= 00:17:48.751 00:17:48.751 Active Namespaces 00:17:48.751 ================= 00:17:48.751 Discovery Log Page 00:17:48.751 ================== 00:17:48.751 Generation Counter: 2 00:17:48.751 Number of Records: 2 00:17:48.751 Record Format: 0 00:17:48.751 00:17:48.751 Discovery Log Entry 0 00:17:48.751 ---------------------- 00:17:48.751 Transport Type: 3 (TCP) 00:17:48.751 Address Family: 1 (IPv4) 00:17:48.751 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:48.751 Entry Flags: 00:17:48.751 Duplicate Returned Information: 1 00:17:48.751 Explicit Persistent Connection Support for Discovery: 1 00:17:48.751 Transport Requirements: 00:17:48.751 Secure Channel: Not Required 00:17:48.751 Port ID: 0 (0x0000) 00:17:48.751 Controller ID: 65535 (0xffff) 00:17:48.751 Admin Max SQ Size: 128 00:17:48.751 Transport Service Identifier: 4420 00:17:48.751 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:48.751 Transport Address: 10.0.0.3 00:17:48.751 
Discovery Log Entry 1 00:17:48.751 ---------------------- 00:17:48.751 Transport Type: 3 (TCP) 00:17:48.751 Address Family: 1 (IPv4) 00:17:48.751 Subsystem Type: 2 (NVM Subsystem) 00:17:48.751 Entry Flags: 00:17:48.751 Duplicate Returned Information: 0 00:17:48.751 Explicit Persistent Connection Support for Discovery: 0 00:17:48.751 Transport Requirements: 00:17:48.751 Secure Channel: Not Required 00:17:48.751 Port ID: 0 (0x0000) 00:17:48.751 Controller ID: 65535 (0xffff) 00:17:48.751 Admin Max SQ Size: 128 00:17:48.751 Transport Service Identifier: 4420 00:17:48.751 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:48.751 Transport Address: 10.0.0.3 [2024-11-19 12:37:53.906484] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:48.751 [2024-11-19 12:37:53.906501] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d0c0) on tqpair=0x4e6bd0 00:17:48.751 [2024-11-19 12:37:53.906509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.751 [2024-11-19 12:37:53.906515] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d240) on tqpair=0x4e6bd0 00:17:48.751 [2024-11-19 12:37:53.906521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.751 [2024-11-19 12:37:53.906526] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d3c0) on tqpair=0x4e6bd0 00:17:48.751 [2024-11-19 12:37:53.906531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.751 [2024-11-19 12:37:53.906536] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d540) on tqpair=0x4e6bd0 00:17:48.751 [2024-11-19 12:37:53.906541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:48.751 [2024-11-19 12:37:53.906551] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.751 [2024-11-19 12:37:53.906556] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.751 [2024-11-19 12:37:53.906560] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e6bd0) 00:17:48.751 [2024-11-19 12:37:53.906569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.751 [2024-11-19 12:37:53.906595] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d540, cid 3, qid 0 00:17:48.751 [2024-11-19 12:37:53.906653] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.751 [2024-11-19 12:37:53.906661] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.751 [2024-11-19 12:37:53.906695] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.751 [2024-11-19 12:37:53.906700] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d540) on tqpair=0x4e6bd0 00:17:48.751 [2024-11-19 12:37:53.906726] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.751 [2024-11-19 12:37:53.906731] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.751 [2024-11-19 12:37:53.906735] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e6bd0) 00:17:48.751 [2024-11-19 12:37:53.906743] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.751 [2024-11-19 12:37:53.906770] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d540, cid 3, qid 0 00:17:48.751 [2024-11-19 12:37:53.906840] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.751 [2024-11-19 12:37:53.906847] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.751 [2024-11-19 12:37:53.906851] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.751 [2024-11-19 12:37:53.906856] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d540) on tqpair=0x4e6bd0 00:17:48.751 [2024-11-19 12:37:53.906862] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:48.751 [2024-11-19 12:37:53.906872] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:48.751 [2024-11-19 12:37:53.906884] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.906890] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.906894] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e6bd0) 00:17:48.752 [2024-11-19 12:37:53.906902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.752 [2024-11-19 12:37:53.906925] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d540, cid 3, qid 0 00:17:48.752 [2024-11-19 12:37:53.906978] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.752 [2024-11-19 12:37:53.906986] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.752 [2024-11-19 12:37:53.906990] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.906995] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d540) on tqpair=0x4e6bd0 00:17:48.752 [2024-11-19 12:37:53.907007] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907013] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907017] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e6bd0) 00:17:48.752 [2024-11-19 12:37:53.907025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.752 [2024-11-19 12:37:53.907043] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d540, cid 3, qid 0 00:17:48.752 [2024-11-19 12:37:53.907088] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.752 [2024-11-19 12:37:53.907095] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.752 [2024-11-19 12:37:53.907099] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907104] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d540) on tqpair=0x4e6bd0 00:17:48.752 [2024-11-19 12:37:53.907115] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907121] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907125] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e6bd0) 00:17:48.752 [2024-11-19 12:37:53.907144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.752 [2024-11-19 12:37:53.907164] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d540, cid 3, qid 0 00:17:48.752 [2024-11-19 12:37:53.907216] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.752 [2024-11-19 12:37:53.907223] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.752 [2024-11-19 12:37:53.907228] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907232] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d540) on tqpair=0x4e6bd0 00:17:48.752 [2024-11-19 12:37:53.907244] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907249] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907253] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e6bd0) 00:17:48.752 [2024-11-19 12:37:53.907261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.752 [2024-11-19 12:37:53.907280] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d540, cid 3, qid 0 00:17:48.752 [2024-11-19 12:37:53.907323] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.752 [2024-11-19 12:37:53.907331] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.752 [2024-11-19 12:37:53.907335] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907340] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d540) on tqpair=0x4e6bd0 00:17:48.752 [2024-11-19 12:37:53.907351] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907356] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907361] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e6bd0) 00:17:48.752 [2024-11-19 12:37:53.907369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.752 [2024-11-19 12:37:53.907387] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d540, cid 3, qid 0 00:17:48.752 [2024-11-19 12:37:53.907433] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.752 [2024-11-19 12:37:53.907441] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.752 [2024-11-19 12:37:53.907445] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907450] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d540) on tqpair=0x4e6bd0 00:17:48.752 [2024-11-19 12:37:53.907461] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907481] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907485] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e6bd0) 00:17:48.752 [2024-11-19 12:37:53.907493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.752 [2024-11-19 12:37:53.907511] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d540, cid 3, qid 0 00:17:48.752 [2024-11-19 12:37:53.907569] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.752 [2024-11-19 12:37:53.907576] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.752 [2024-11-19 12:37:53.907580] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907585] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d540) on tqpair=0x4e6bd0 00:17:48.752 [2024-11-19 12:37:53.907595] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907600] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.907604] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e6bd0) 00:17:48.752 [2024-11-19 12:37:53.907612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.752 [2024-11-19 12:37:53.907629] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d540, cid 3, qid 0 00:17:48.752 [2024-11-19 12:37:53.907695] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.752 [2024-11-19 12:37:53.907702] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.752 [2024-11-19 12:37:53.907706] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.911762] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d540) on tqpair=0x4e6bd0 00:17:48.752 [2024-11-19 12:37:53.911797] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.911803] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.911807] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4e6bd0) 00:17:48.752 [2024-11-19 12:37:53.911817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.752 [2024-11-19 12:37:53.911844] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x52d540, cid 3, qid 0 00:17:48.752 [2024-11-19 12:37:53.911891] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:48.752 [2024-11-19 12:37:53.911899] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:48.752 [2024-11-19 12:37:53.911903] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:48.752 [2024-11-19 12:37:53.911908] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x52d540) on tqpair=0x4e6bd0 00:17:48.752 [2024-11-19 12:37:53.911916] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:17:48.752 00:17:48.752 12:37:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:48.752 [2024-11-19 12:37:53.950651] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
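The discovery pass above ends with a two-entry discovery log (the discovery subsystem itself, plus nqn.2016-06.io.spdk:cnode1 reachable over TCP at 10.0.0.3:4420), and identify.sh now reruns spdk_nvme_identify directly against that NVM subsystem. The trace that follows walks the host-side connect path: TCP socket connect, ICReq/ICResp exchange, FABRIC CONNECT, property accesses for VS/CAP/CC/CSTS, then IDENTIFY. As a rough orientation only, a minimal sketch of that same path through SPDK's public API is given here; it is not the identify tool's actual source, the program name is made up, and error handling is trimmed.

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* hypothetical app name, not from the test */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport string the test passes to spdk_nvme_identify via -r. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Triggers the sequence the *DEBUG* lines below record: socket connect,
         * icreq/icresp, FABRIC CONNECT, CAP/VS/CC/CSTS property get/set, IDENTIFY. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Model Number: %.40s\n", (const char *)cdata->mn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }

Built against an SPDK checkout, this only serves to map the connect/identify *DEBUG* trace below onto the calls a host application would make; the identify tool itself prints the full controller report seen later in the log.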
00:17:48.752 [2024-11-19 12:37:53.950719] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88909 ] 00:17:49.018 [2024-11-19 12:37:54.088876] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:49.018 [2024-11-19 12:37:54.088949] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:49.018 [2024-11-19 12:37:54.088956] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:49.018 [2024-11-19 12:37:54.088967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:49.018 [2024-11-19 12:37:54.088975] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:49.018 [2024-11-19 12:37:54.089259] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:49.018 [2024-11-19 12:37:54.089320] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x522bd0 0 00:17:49.018 [2024-11-19 12:37:54.094706] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:49.018 [2024-11-19 12:37:54.094745] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:49.018 [2024-11-19 12:37:54.094767] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:49.018 [2024-11-19 12:37:54.094771] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:49.018 [2024-11-19 12:37:54.094804] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.018 [2024-11-19 12:37:54.094811] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.018 [2024-11-19 12:37:54.094815] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x522bd0) 00:17:49.018 [2024-11-19 12:37:54.094826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:49.018 [2024-11-19 12:37:54.094863] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5690c0, cid 0, qid 0 00:17:49.018 [2024-11-19 12:37:54.102701] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.018 [2024-11-19 12:37:54.102723] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.018 [2024-11-19 12:37:54.102744] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.018 [2024-11-19 12:37:54.102749] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5690c0) on tqpair=0x522bd0 00:17:49.018 [2024-11-19 12:37:54.102760] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:49.018 [2024-11-19 12:37:54.102767] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:49.018 [2024-11-19 12:37:54.102774] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:49.018 [2024-11-19 12:37:54.102791] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.018 [2024-11-19 12:37:54.102796] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.018 [2024-11-19 12:37:54.102800] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x522bd0) 00:17:49.018 [2024-11-19 12:37:54.102810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.018 [2024-11-19 12:37:54.102837] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5690c0, cid 0, qid 0 00:17:49.018 [2024-11-19 12:37:54.102893] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.018 [2024-11-19 12:37:54.102900] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.018 [2024-11-19 12:37:54.102904] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.018 [2024-11-19 12:37:54.102908] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5690c0) on tqpair=0x522bd0 00:17:49.018 [2024-11-19 12:37:54.102914] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:49.018 [2024-11-19 12:37:54.102922] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:49.018 [2024-11-19 12:37:54.102929] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.018 [2024-11-19 12:37:54.102934] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.018 [2024-11-19 12:37:54.102937] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x522bd0) 00:17:49.019 [2024-11-19 12:37:54.102945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.019 [2024-11-19 12:37:54.102980] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5690c0, cid 0, qid 0 00:17:49.019 [2024-11-19 12:37:54.103028] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.019 [2024-11-19 12:37:54.103035] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.019 [2024-11-19 12:37:54.103039] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103043] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5690c0) on tqpair=0x522bd0 00:17:49.019 [2024-11-19 12:37:54.103065] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:49.019 [2024-11-19 12:37:54.103073] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:49.019 [2024-11-19 12:37:54.103081] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103085] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103089] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x522bd0) 00:17:49.019 [2024-11-19 12:37:54.103096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.019 [2024-11-19 12:37:54.103113] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5690c0, cid 0, qid 0 00:17:49.019 [2024-11-19 12:37:54.103203] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.019 [2024-11-19 12:37:54.103212] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.019 [2024-11-19 12:37:54.103216] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103220] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5690c0) on tqpair=0x522bd0 00:17:49.019 [2024-11-19 12:37:54.103226] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:49.019 [2024-11-19 12:37:54.103237] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103242] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103246] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x522bd0) 00:17:49.019 [2024-11-19 12:37:54.103254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.019 [2024-11-19 12:37:54.103274] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5690c0, cid 0, qid 0 00:17:49.019 [2024-11-19 12:37:54.103318] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.019 [2024-11-19 12:37:54.103325] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.019 [2024-11-19 12:37:54.103329] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103333] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5690c0) on tqpair=0x522bd0 00:17:49.019 [2024-11-19 12:37:54.103338] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:49.019 [2024-11-19 12:37:54.103344] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:49.019 [2024-11-19 12:37:54.103352] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:49.019 [2024-11-19 12:37:54.103458] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:49.019 [2024-11-19 12:37:54.103463] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:49.019 [2024-11-19 12:37:54.103486] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103491] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103495] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x522bd0) 00:17:49.019 [2024-11-19 12:37:54.103508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.019 [2024-11-19 12:37:54.103526] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5690c0, cid 0, qid 0 00:17:49.019 [2024-11-19 12:37:54.103591] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.019 [2024-11-19 12:37:54.103597] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.019 [2024-11-19 12:37:54.103601] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103605] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5690c0) on tqpair=0x522bd0 00:17:49.019 [2024-11-19 12:37:54.103611] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:49.019 [2024-11-19 12:37:54.103620] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103625] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103629] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x522bd0) 00:17:49.019 [2024-11-19 12:37:54.103636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.019 [2024-11-19 12:37:54.103652] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5690c0, cid 0, qid 0 00:17:49.019 [2024-11-19 12:37:54.103719] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.019 [2024-11-19 12:37:54.103726] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.019 [2024-11-19 12:37:54.103730] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103734] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5690c0) on tqpair=0x522bd0 00:17:49.019 [2024-11-19 12:37:54.103753] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:49.019 [2024-11-19 12:37:54.103759] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:49.019 [2024-11-19 12:37:54.103768] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:49.019 [2024-11-19 12:37:54.103785] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:49.019 [2024-11-19 12:37:54.103795] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103799] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x522bd0) 00:17:49.019 [2024-11-19 12:37:54.103808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.019 [2024-11-19 12:37:54.103829] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5690c0, cid 0, qid 0 00:17:49.019 [2024-11-19 12:37:54.103915] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:49.019 [2024-11-19 12:37:54.103923] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:49.019 [2024-11-19 12:37:54.103927] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103931] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x522bd0): datao=0, datal=4096, cccid=0 00:17:49.019 [2024-11-19 12:37:54.103936] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5690c0) on tqpair(0x522bd0): expected_datao=0, payload_size=4096 00:17:49.019 [2024-11-19 12:37:54.103941] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103949] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103953] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:49.019 [2024-11-19 
12:37:54.103962] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.019 [2024-11-19 12:37:54.103969] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.019 [2024-11-19 12:37:54.103972] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.103977] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5690c0) on tqpair=0x522bd0 00:17:49.019 [2024-11-19 12:37:54.103985] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:49.019 [2024-11-19 12:37:54.103991] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:49.019 [2024-11-19 12:37:54.103996] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:49.019 [2024-11-19 12:37:54.104000] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:49.019 [2024-11-19 12:37:54.104005] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:49.019 [2024-11-19 12:37:54.104010] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:49.019 [2024-11-19 12:37:54.104020] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:49.019 [2024-11-19 12:37:54.104027] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.104032] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.104036] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x522bd0) 00:17:49.019 [2024-11-19 12:37:54.104044] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:49.019 [2024-11-19 12:37:54.104064] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5690c0, cid 0, qid 0 00:17:49.019 [2024-11-19 12:37:54.104132] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.019 [2024-11-19 12:37:54.104139] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.019 [2024-11-19 12:37:54.104143] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.104147] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5690c0) on tqpair=0x522bd0 00:17:49.019 [2024-11-19 12:37:54.104155] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.104159] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.104163] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x522bd0) 00:17:49.019 [2024-11-19 12:37:54.104170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.019 [2024-11-19 12:37:54.104176] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.104180] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.104184] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x522bd0) 00:17:49.019 
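At this point the admin-queue bring-up has reached the AER stage: IDENTIFY has completed (transport max_xfer_size capped by MDTS to 131072, CNTLID 0x0001, fused compare-and-write supported), the driver issues SET FEATURES ASYNC EVENT CONFIGURATION (cdw10:0000000b), and the ASYNC EVENT REQUEST notices on either side of this point (cid 0 through 3) are the driver posting its standing event requests. The driver queues those requests itself; a host application only supplies the completion callback. A hedged fragment showing that hook through the public API follows; the names aer_cb and register_aer are illustrative, not part of the test.

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Called whenever one of the outstanding ASYNC EVENT REQUESTs completes.
     * Completion dword 0 carries the async event type/info; bits 23:16 name
     * the log page to read for details. */
    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)arg;
        printf("AER completion: cdw0=0x%08x\n", cpl->cdw0);
    }

    /* Illustrative helper: call once after spdk_nvme_connect() succeeds. */
    static void
    register_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
    }

The posted requests stay outstanding until the target raises an event; the trace then continues with keep-alive timeout setup, SET FEATURES NUMBER OF QUEUES, and the namespace identify steps.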
[2024-11-19 12:37:54.104190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.019 [2024-11-19 12:37:54.104196] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.104200] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.019 [2024-11-19 12:37:54.104204] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x522bd0) 00:17:49.019 [2024-11-19 12:37:54.104210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.020 [2024-11-19 12:37:54.104216] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104220] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104223] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.020 [2024-11-19 12:37:54.104229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.020 [2024-11-19 12:37:54.104234] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.104248] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.104255] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104259] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x522bd0) 00:17:49.020 [2024-11-19 12:37:54.104266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.020 [2024-11-19 12:37:54.104287] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5690c0, cid 0, qid 0 00:17:49.020 [2024-11-19 12:37:54.104293] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569240, cid 1, qid 0 00:17:49.020 [2024-11-19 12:37:54.104299] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5693c0, cid 2, qid 0 00:17:49.020 [2024-11-19 12:37:54.104304] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.020 [2024-11-19 12:37:54.104308] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5696c0, cid 4, qid 0 00:17:49.020 [2024-11-19 12:37:54.104394] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.020 [2024-11-19 12:37:54.104400] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.020 [2024-11-19 12:37:54.104404] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104408] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5696c0) on tqpair=0x522bd0 00:17:49.020 [2024-11-19 12:37:54.104414] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:49.020 [2024-11-19 12:37:54.104420] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.104428] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.104438] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.104446] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104450] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104454] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x522bd0) 00:17:49.020 [2024-11-19 12:37:54.104462] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:49.020 [2024-11-19 12:37:54.104480] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5696c0, cid 4, qid 0 00:17:49.020 [2024-11-19 12:37:54.104525] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.020 [2024-11-19 12:37:54.104532] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.020 [2024-11-19 12:37:54.104535] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104540] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5696c0) on tqpair=0x522bd0 00:17:49.020 [2024-11-19 12:37:54.104603] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.104614] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.104622] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104626] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x522bd0) 00:17:49.020 [2024-11-19 12:37:54.104634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.020 [2024-11-19 12:37:54.104653] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5696c0, cid 4, qid 0 00:17:49.020 [2024-11-19 12:37:54.104729] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:49.020 [2024-11-19 12:37:54.104737] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:49.020 [2024-11-19 12:37:54.104741] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104745] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x522bd0): datao=0, datal=4096, cccid=4 00:17:49.020 [2024-11-19 12:37:54.104750] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5696c0) on tqpair(0x522bd0): expected_datao=0, payload_size=4096 00:17:49.020 [2024-11-19 12:37:54.104754] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104762] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104766] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104774] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.020 [2024-11-19 12:37:54.104780] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:17:49.020 [2024-11-19 12:37:54.104784] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104788] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5696c0) on tqpair=0x522bd0 00:17:49.020 [2024-11-19 12:37:54.104798] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:49.020 [2024-11-19 12:37:54.104810] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.104820] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.104829] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104833] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x522bd0) 00:17:49.020 [2024-11-19 12:37:54.104840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.020 [2024-11-19 12:37:54.104861] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5696c0, cid 4, qid 0 00:17:49.020 [2024-11-19 12:37:54.104935] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:49.020 [2024-11-19 12:37:54.104942] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:49.020 [2024-11-19 12:37:54.104946] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104950] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x522bd0): datao=0, datal=4096, cccid=4 00:17:49.020 [2024-11-19 12:37:54.104954] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5696c0) on tqpair(0x522bd0): expected_datao=0, payload_size=4096 00:17:49.020 [2024-11-19 12:37:54.104959] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104966] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104970] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104978] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.020 [2024-11-19 12:37:54.104984] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.020 [2024-11-19 12:37:54.104988] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.104992] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5696c0) on tqpair=0x522bd0 00:17:49.020 [2024-11-19 12:37:54.105007] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.105018] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.105026] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.105030] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x522bd0) 00:17:49.020 [2024-11-19 12:37:54.105038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.020 [2024-11-19 12:37:54.105058] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5696c0, cid 4, qid 0 00:17:49.020 [2024-11-19 12:37:54.105115] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:49.020 [2024-11-19 12:37:54.105122] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:49.020 [2024-11-19 12:37:54.105126] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.105130] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x522bd0): datao=0, datal=4096, cccid=4 00:17:49.020 [2024-11-19 12:37:54.105135] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5696c0) on tqpair(0x522bd0): expected_datao=0, payload_size=4096 00:17:49.020 [2024-11-19 12:37:54.105139] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.105146] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.105150] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.105158] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.020 [2024-11-19 12:37:54.105164] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.020 [2024-11-19 12:37:54.105168] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.105172] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5696c0) on tqpair=0x522bd0 00:17:49.020 [2024-11-19 12:37:54.105181] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.105189] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.105200] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.105206] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.105212] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.105217] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.105223] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:49.020 [2024-11-19 12:37:54.105228] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:49.020 [2024-11-19 12:37:54.105233] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:49.020 [2024-11-19 12:37:54.105247] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.020 [2024-11-19 12:37:54.105252] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x522bd0) 00:17:49.021 [2024-11-19 12:37:54.105259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.021 [2024-11-19 12:37:54.105267] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.105271] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.105274] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x522bd0) 00:17:49.021 [2024-11-19 12:37:54.105281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:49.021 [2024-11-19 12:37:54.105301] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5696c0, cid 4, qid 0 00:17:49.021 [2024-11-19 12:37:54.105308] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569840, cid 5, qid 0 00:17:49.021 [2024-11-19 12:37:54.105370] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.021 [2024-11-19 12:37:54.105377] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.021 [2024-11-19 12:37:54.105380] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.105385] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5696c0) on tqpair=0x522bd0 00:17:49.021 [2024-11-19 12:37:54.105391] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.021 [2024-11-19 12:37:54.105397] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.021 [2024-11-19 12:37:54.105401] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.105405] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569840) on tqpair=0x522bd0 00:17:49.021 [2024-11-19 12:37:54.105415] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.105420] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x522bd0) 00:17:49.021 [2024-11-19 12:37:54.105427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.021 [2024-11-19 12:37:54.105444] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569840, cid 5, qid 0 00:17:49.021 [2024-11-19 12:37:54.105489] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.021 [2024-11-19 12:37:54.105496] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.021 [2024-11-19 12:37:54.105500] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.105504] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569840) on tqpair=0x522bd0 00:17:49.021 [2024-11-19 12:37:54.105514] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.105518] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x522bd0) 00:17:49.021 [2024-11-19 12:37:54.105525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.021 [2024-11-19 12:37:54.105542] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569840, cid 5, qid 0 00:17:49.021 [2024-11-19 12:37:54.105590] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.021 [2024-11-19 12:37:54.105597] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:17:49.021 [2024-11-19 12:37:54.105601] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.105605] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569840) on tqpair=0x522bd0 00:17:49.021 [2024-11-19 12:37:54.105615] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.105619] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x522bd0) 00:17:49.021 [2024-11-19 12:37:54.105626] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.021 [2024-11-19 12:37:54.105642] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569840, cid 5, qid 0 00:17:49.021 [2024-11-19 12:37:54.105702] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.021 [2024-11-19 12:37:54.105726] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.021 [2024-11-19 12:37:54.105730] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.105734] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569840) on tqpair=0x522bd0 00:17:49.021 [2024-11-19 12:37:54.105753] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.105759] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x522bd0) 00:17:49.021 [2024-11-19 12:37:54.105766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.021 [2024-11-19 12:37:54.105774] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.105778] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x522bd0) 00:17:49.021 [2024-11-19 12:37:54.105785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.021 [2024-11-19 12:37:54.105793] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.105797] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x522bd0) 00:17:49.021 [2024-11-19 12:37:54.105803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.021 [2024-11-19 12:37:54.105814] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.105819] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x522bd0) 00:17:49.021 [2024-11-19 12:37:54.105825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.021 [2024-11-19 12:37:54.105847] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569840, cid 5, qid 0 00:17:49.021 [2024-11-19 12:37:54.105855] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5696c0, cid 4, qid 0 00:17:49.021 [2024-11-19 12:37:54.105860] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5699c0, cid 6, qid 0 00:17:49.021 [2024-11-19 
12:37:54.105865] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569b40, cid 7, qid 0 00:17:49.021 [2024-11-19 12:37:54.106001] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:49.021 [2024-11-19 12:37:54.106009] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:49.021 [2024-11-19 12:37:54.106013] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106017] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x522bd0): datao=0, datal=8192, cccid=5 00:17:49.021 [2024-11-19 12:37:54.106021] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x569840) on tqpair(0x522bd0): expected_datao=0, payload_size=8192 00:17:49.021 [2024-11-19 12:37:54.106026] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106043] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106048] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106054] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:49.021 [2024-11-19 12:37:54.106060] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:49.021 [2024-11-19 12:37:54.106064] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106068] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x522bd0): datao=0, datal=512, cccid=4 00:17:49.021 [2024-11-19 12:37:54.106073] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5696c0) on tqpair(0x522bd0): expected_datao=0, payload_size=512 00:17:49.021 [2024-11-19 12:37:54.106078] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106099] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106103] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106109] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:49.021 [2024-11-19 12:37:54.106114] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:49.021 [2024-11-19 12:37:54.106118] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106122] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x522bd0): datao=0, datal=512, cccid=6 00:17:49.021 [2024-11-19 12:37:54.106126] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5699c0) on tqpair(0x522bd0): expected_datao=0, payload_size=512 00:17:49.021 [2024-11-19 12:37:54.106131] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106137] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106140] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106146] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:49.021 [2024-11-19 12:37:54.106152] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:49.021 [2024-11-19 12:37:54.106155] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106159] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x522bd0): datao=0, datal=4096, cccid=7 00:17:49.021 [2024-11-19 12:37:54.106163] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x569b40) on tqpair(0x522bd0): expected_datao=0, payload_size=4096 00:17:49.021 [2024-11-19 12:37:54.106168] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106174] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106178] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106186] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.021 [2024-11-19 12:37:54.106192] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.021 [2024-11-19 12:37:54.106196] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106200] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569840) on tqpair=0x522bd0 00:17:49.021 [2024-11-19 12:37:54.106216] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.021 [2024-11-19 12:37:54.106223] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.021 [2024-11-19 12:37:54.106227] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.021 [2024-11-19 12:37:54.106231] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5696c0) on tqpair=0x522bd0 00:17:49.021 [2024-11-19 12:37:54.106242] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.021 [2024-11-19 12:37:54.106248] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.021 [2024-11-19 12:37:54.106252] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.021 ===================================================== 00:17:49.021 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:49.021 ===================================================== 00:17:49.021 Controller Capabilities/Features 00:17:49.021 ================================ 00:17:49.021 Vendor ID: 8086 00:17:49.021 Subsystem Vendor ID: 8086 00:17:49.021 Serial Number: SPDK00000000000001 00:17:49.021 Model Number: SPDK bdev Controller 00:17:49.021 Firmware Version: 24.09.1 00:17:49.021 Recommended Arb Burst: 6 00:17:49.021 IEEE OUI Identifier: e4 d2 5c 00:17:49.021 Multi-path I/O 00:17:49.022 May have multiple subsystem ports: Yes 00:17:49.022 May have multiple controllers: Yes 00:17:49.022 Associated with SR-IOV VF: No 00:17:49.022 Max Data Transfer Size: 131072 00:17:49.022 Max Number of Namespaces: 32 00:17:49.022 Max Number of I/O Queues: 127 00:17:49.022 NVMe Specification Version (VS): 1.3 00:17:49.022 NVMe Specification Version (Identify): 1.3 00:17:49.022 Maximum Queue Entries: 128 00:17:49.022 Contiguous Queues Required: Yes 00:17:49.022 Arbitration Mechanisms Supported 00:17:49.022 Weighted Round Robin: Not Supported 00:17:49.022 Vendor Specific: Not Supported 00:17:49.022 Reset Timeout: 15000 ms 00:17:49.022 Doorbell Stride: 4 bytes 00:17:49.022 NVM Subsystem Reset: Not Supported 00:17:49.022 Command Sets Supported 00:17:49.022 NVM Command Set: Supported 00:17:49.022 Boot Partition: Not Supported 00:17:49.022 Memory Page Size Minimum: 4096 bytes 00:17:49.022 Memory Page Size Maximum: 4096 bytes 00:17:49.022 Persistent Memory Region: Not Supported 00:17:49.022 Optional Asynchronous Events Supported 00:17:49.022 Namespace Attribute Notices: Supported 00:17:49.022 Firmware Activation Notices: Not Supported 00:17:49.022 ANA Change Notices: Not Supported 00:17:49.022 PLE Aggregate Log Change 
Notices: Not Supported 00:17:49.022 LBA Status Info Alert Notices: Not Supported 00:17:49.022 EGE Aggregate Log Change Notices: Not Supported 00:17:49.022 Normal NVM Subsystem Shutdown event: Not Supported 00:17:49.022 Zone Descriptor Change Notices: Not Supported 00:17:49.022 Discovery Log Change Notices: Not Supported 00:17:49.022 Controller Attributes 00:17:49.022 128-bit Host Identifier: Supported 00:17:49.022 Non-Operational Permissive Mode: Not Supported 00:17:49.022 NVM Sets: Not Supported 00:17:49.022 Read Recovery Levels: Not Supported 00:17:49.022 Endurance Groups: Not Supported 00:17:49.022 Predictable Latency Mode: Not Supported 00:17:49.022 Traffic Based Keep ALive: Not Supported 00:17:49.022 Namespace Granularity: Not Supported 00:17:49.022 SQ Associations: Not Supported 00:17:49.022 UUID List: Not Supported 00:17:49.022 Multi-Domain Subsystem: Not Supported 00:17:49.022 Fixed Capacity Management: Not Supported 00:17:49.022 Variable Capacity Management: Not Supported 00:17:49.022 Delete Endurance Group: Not Supported 00:17:49.022 Delete NVM Set: Not Supported 00:17:49.022 Extended LBA Formats Supported: Not Supported 00:17:49.022 Flexible Data Placement Supported: Not Supported 00:17:49.022 00:17:49.022 Controller Memory Buffer Support 00:17:49.022 ================================ 00:17:49.022 Supported: No 00:17:49.022 00:17:49.022 Persistent Memory Region Support 00:17:49.022 ================================ 00:17:49.022 Supported: No 00:17:49.022 00:17:49.022 Admin Command Set Attributes 00:17:49.022 ============================ 00:17:49.022 Security Send/Receive: Not Supported 00:17:49.022 Format NVM: Not Supported 00:17:49.022 Firmware Activate/Download: Not Supported 00:17:49.022 Namespace Management: Not Supported 00:17:49.022 Device Self-Test: Not Supported 00:17:49.022 Directives: Not Supported 00:17:49.022 NVMe-MI: Not Supported 00:17:49.022 Virtualization Management: Not Supported 00:17:49.022 Doorbell Buffer Config: Not Supported 00:17:49.022 Get LBA Status Capability: Not Supported 00:17:49.022 Command & Feature Lockdown Capability: Not Supported 00:17:49.022 Abort Command Limit: 4 00:17:49.022 Async Event Request Limit: 4 00:17:49.022 Number of Firmware Slots: N/A 00:17:49.022 Firmware Slot 1 Read-Only: N/A 00:17:49.022 Firmware Activation Without Reset: N/A 00:17:49.022 Multiple Update Detection Support: N/A 00:17:49.022 Firmware Update Granularity: No Information Provided 00:17:49.022 Per-Namespace SMART Log: No 00:17:49.022 Asymmetric Namespace Access Log Page: Not Supported 00:17:49.022 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:49.022 Command Effects Log Page: Supported 00:17:49.022 Get Log Page Extended Data: Supported 00:17:49.022 Telemetry Log Pages: Not Supported 00:17:49.022 Persistent Event Log Pages: Not Supported 00:17:49.022 Supported Log Pages Log Page: May Support 00:17:49.022 Commands Supported & Effects Log Page: Not Supported 00:17:49.022 Feature Identifiers & Effects Log Page:May Support 00:17:49.022 NVMe-MI Commands & Effects Log Page: May Support 00:17:49.022 Data Area 4 for Telemetry Log: Not Supported 00:17:49.022 Error Log Page Entries Supported: 128 00:17:49.022 Keep Alive: Supported 00:17:49.022 Keep Alive Granularity: 10000 ms 00:17:49.022 00:17:49.022 NVM Command Set Attributes 00:17:49.022 ========================== 00:17:49.022 Submission Queue Entry Size 00:17:49.022 Max: 64 00:17:49.022 Min: 64 00:17:49.022 Completion Queue Entry Size 00:17:49.022 Max: 16 00:17:49.022 Min: 16 00:17:49.022 Number of Namespaces: 32 
00:17:49.022 Compare Command: Supported 00:17:49.022 Write Uncorrectable Command: Not Supported 00:17:49.022 Dataset Management Command: Supported 00:17:49.022 Write Zeroes Command: Supported 00:17:49.022 Set Features Save Field: Not Supported 00:17:49.022 Reservations: Supported 00:17:49.022 Timestamp: Not Supported 00:17:49.022 Copy: Supported 00:17:49.022 Volatile Write Cache: Present 00:17:49.022 Atomic Write Unit (Normal): 1 00:17:49.022 Atomic Write Unit (PFail): 1 00:17:49.022 Atomic Compare & Write Unit: 1 00:17:49.022 Fused Compare & Write: Supported 00:17:49.022 Scatter-Gather List 00:17:49.022 SGL Command Set: Supported 00:17:49.022 SGL Keyed: Supported 00:17:49.022 SGL Bit Bucket Descriptor: Not Supported 00:17:49.022 SGL Metadata Pointer: Not Supported 00:17:49.022 Oversized SGL: Not Supported 00:17:49.022 SGL Metadata Address: Not Supported 00:17:49.022 SGL Offset: Supported 00:17:49.022 Transport SGL Data Block: Not Supported 00:17:49.022 Replay Protected Memory Block: Not Supported 00:17:49.022 00:17:49.022 Firmware Slot Information 00:17:49.022 ========================= 00:17:49.022 Active slot: 1 00:17:49.022 Slot 1 Firmware Revision: 24.09.1 00:17:49.022 00:17:49.022 00:17:49.022 Commands Supported and Effects 00:17:49.022 ============================== 00:17:49.022 Admin Commands 00:17:49.022 -------------- 00:17:49.022 Get Log Page (02h): Supported 00:17:49.022 Identify (06h): Supported 00:17:49.022 Abort (08h): Supported 00:17:49.022 Set Features (09h): Supported 00:17:49.022 Get Features (0Ah): Supported 00:17:49.022 Asynchronous Event Request (0Ch): Supported 00:17:49.022 Keep Alive (18h): Supported 00:17:49.022 I/O Commands 00:17:49.022 ------------ 00:17:49.022 Flush (00h): Supported LBA-Change 00:17:49.022 Write (01h): Supported LBA-Change 00:17:49.022 Read (02h): Supported 00:17:49.022 Compare (05h): Supported 00:17:49.022 Write Zeroes (08h): Supported LBA-Change 00:17:49.022 Dataset Management (09h): Supported LBA-Change 00:17:49.022 Copy (19h): Supported LBA-Change 00:17:49.022 00:17:49.022 Error Log 00:17:49.022 ========= 00:17:49.022 00:17:49.022 Arbitration 00:17:49.022 =========== 00:17:49.022 Arbitration Burst: 1 00:17:49.022 00:17:49.022 Power Management 00:17:49.022 ================ 00:17:49.022 Number of Power States: 1 00:17:49.022 Current Power State: Power State #0 00:17:49.022 Power State #0: 00:17:49.022 Max Power: 0.00 W 00:17:49.022 Non-Operational State: Operational 00:17:49.022 Entry Latency: Not Reported 00:17:49.022 Exit Latency: Not Reported 00:17:49.022 Relative Read Throughput: 0 00:17:49.022 Relative Read Latency: 0 00:17:49.022 Relative Write Throughput: 0 00:17:49.022 Relative Write Latency: 0 00:17:49.022 Idle Power: Not Reported 00:17:49.022 Active Power: Not Reported 00:17:49.022 Non-Operational Permissive Mode: Not Supported 00:17:49.022 00:17:49.022 Health Information 00:17:49.022 ================== 00:17:49.022 Critical Warnings: 00:17:49.022 Available Spare Space: OK 00:17:49.022 Temperature: OK 00:17:49.022 Device Reliability: OK 00:17:49.022 Read Only: No 00:17:49.022 Volatile Memory Backup: OK 00:17:49.022 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:49.022 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:49.022 Available Spare: 0% 00:17:49.022 Available Spare Threshold: 0% 00:17:49.022 Life Percentage U[2024-11-19 12:37:54.106256] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5699c0) on tqpair=0x522bd0 00:17:49.022 [2024-11-19 12:37:54.106263] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:17:49.022 [2024-11-19 12:37:54.106269] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.022 [2024-11-19 12:37:54.106273] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.022 [2024-11-19 12:37:54.106277] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569b40) on tqpair=0x522bd0 00:17:49.022 [2024-11-19 12:37:54.106376] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.022 [2024-11-19 12:37:54.106383] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x522bd0) 00:17:49.023 [2024-11-19 12:37:54.106391] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.023 [2024-11-19 12:37:54.106413] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569b40, cid 7, qid 0 00:17:49.023 [2024-11-19 12:37:54.106461] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.023 [2024-11-19 12:37:54.106468] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.023 [2024-11-19 12:37:54.106472] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.106476] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569b40) on tqpair=0x522bd0 00:17:49.023 [2024-11-19 12:37:54.106513] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:49.023 [2024-11-19 12:37:54.106525] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5690c0) on tqpair=0x522bd0 00:17:49.023 [2024-11-19 12:37:54.106532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.023 [2024-11-19 12:37:54.106538] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569240) on tqpair=0x522bd0 00:17:49.023 [2024-11-19 12:37:54.106542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.023 [2024-11-19 12:37:54.106548] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5693c0) on tqpair=0x522bd0 00:17:49.023 [2024-11-19 12:37:54.106552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.023 [2024-11-19 12:37:54.106557] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.023 [2024-11-19 12:37:54.106562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:49.023 [2024-11-19 12:37:54.106571] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.106575] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.106579] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.023 [2024-11-19 12:37:54.106587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.023 [2024-11-19 12:37:54.106609] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.023 [2024-11-19 12:37:54.106654] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.023 [2024-11-19 
12:37:54.106661] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.023 [2024-11-19 12:37:54.106665] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.106669] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.023 [2024-11-19 12:37:54.106692] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.106697] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.106701] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.023 [2024-11-19 12:37:54.110757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.023 [2024-11-19 12:37:54.110806] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.023 [2024-11-19 12:37:54.110873] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.023 [2024-11-19 12:37:54.110881] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.023 [2024-11-19 12:37:54.110884] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.110889] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.023 [2024-11-19 12:37:54.110894] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:49.023 [2024-11-19 12:37:54.110899] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:49.023 [2024-11-19 12:37:54.110910] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.110915] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.110919] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.023 [2024-11-19 12:37:54.110928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.023 [2024-11-19 12:37:54.110951] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.023 [2024-11-19 12:37:54.110995] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.023 [2024-11-19 12:37:54.111002] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.023 [2024-11-19 12:37:54.111006] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.111010] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.023 [2024-11-19 12:37:54.111021] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.111025] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.111029] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.023 [2024-11-19 12:37:54.111036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.023 [2024-11-19 12:37:54.111053] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.023 [2024-11-19 12:37:54.111094] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.023 [2024-11-19 12:37:54.111101] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.023 [2024-11-19 12:37:54.111105] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.111109] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.023 [2024-11-19 12:37:54.111119] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.111124] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.111136] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.023 [2024-11-19 12:37:54.111160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.023 [2024-11-19 12:37:54.111179] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.023 [2024-11-19 12:37:54.111226] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.023 [2024-11-19 12:37:54.111233] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.023 [2024-11-19 12:37:54.111237] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.111242] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.023 [2024-11-19 12:37:54.111253] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.111258] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.111262] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.023 [2024-11-19 12:37:54.111270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.023 [2024-11-19 12:37:54.111287] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.023 [2024-11-19 12:37:54.111332] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.023 [2024-11-19 12:37:54.111339] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.023 [2024-11-19 12:37:54.111343] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.111348] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.023 [2024-11-19 12:37:54.111358] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.111363] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.111367] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.023 [2024-11-19 12:37:54.111375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.023 [2024-11-19 12:37:54.111392] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.023 [2024-11-19 12:37:54.111436] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.023 [2024-11-19 12:37:54.111443] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.023 [2024-11-19 12:37:54.111447] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.111452] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.023 [2024-11-19 12:37:54.111463] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.111482] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.023 [2024-11-19 12:37:54.111486] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.023 [2024-11-19 12:37:54.111493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.023 [2024-11-19 12:37:54.111524] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.023 [2024-11-19 12:37:54.111565] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.024 [2024-11-19 12:37:54.111572] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.024 [2024-11-19 12:37:54.111575] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.111580] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.024 [2024-11-19 12:37:54.111589] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.111594] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.111598] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.024 [2024-11-19 12:37:54.111605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.024 [2024-11-19 12:37:54.111621] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.024 [2024-11-19 12:37:54.111668] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.024 [2024-11-19 12:37:54.111675] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.024 [2024-11-19 12:37:54.111678] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.111682] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.024 [2024-11-19 12:37:54.111692] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.111697] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.111701] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.024 [2024-11-19 12:37:54.111722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.024 [2024-11-19 12:37:54.111742] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.024 [2024-11-19 12:37:54.111793] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.024 [2024-11-19 12:37:54.111800] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.024 [2024-11-19 12:37:54.111803] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.111808] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.024 
[2024-11-19 12:37:54.111818] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.111822] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.111826] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.024 [2024-11-19 12:37:54.111834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.024 [2024-11-19 12:37:54.111850] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.024 [2024-11-19 12:37:54.111894] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.024 [2024-11-19 12:37:54.111900] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.024 [2024-11-19 12:37:54.111904] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.111908] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.024 [2024-11-19 12:37:54.111918] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.111923] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.111927] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.024 [2024-11-19 12:37:54.111934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.024 [2024-11-19 12:37:54.111950] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.024 [2024-11-19 12:37:54.111994] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.024 [2024-11-19 12:37:54.112001] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.024 [2024-11-19 12:37:54.112004] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112009] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.024 [2024-11-19 12:37:54.112019] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112023] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112027] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.024 [2024-11-19 12:37:54.112034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.024 [2024-11-19 12:37:54.112050] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.024 [2024-11-19 12:37:54.112094] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.024 [2024-11-19 12:37:54.112101] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.024 [2024-11-19 12:37:54.112105] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112109] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.024 [2024-11-19 12:37:54.112119] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112124] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.024 [2024-11-19 
12:37:54.112128] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.024 [2024-11-19 12:37:54.112135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.024 [2024-11-19 12:37:54.112151] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.024 [2024-11-19 12:37:54.112195] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.024 [2024-11-19 12:37:54.112202] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.024 [2024-11-19 12:37:54.112205] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112209] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.024 [2024-11-19 12:37:54.112219] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112224] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112228] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.024 [2024-11-19 12:37:54.112235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.024 [2024-11-19 12:37:54.112252] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.024 [2024-11-19 12:37:54.112295] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.024 [2024-11-19 12:37:54.112302] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.024 [2024-11-19 12:37:54.112305] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112309] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.024 [2024-11-19 12:37:54.112319] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112324] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112328] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.024 [2024-11-19 12:37:54.112335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.024 [2024-11-19 12:37:54.112351] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.024 [2024-11-19 12:37:54.112399] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.024 [2024-11-19 12:37:54.112406] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.024 [2024-11-19 12:37:54.112410] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112414] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.024 [2024-11-19 12:37:54.112424] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112429] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112433] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.024 [2024-11-19 12:37:54.112440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.024 [2024-11-19 12:37:54.112456] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.024 [2024-11-19 12:37:54.112497] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.024 [2024-11-19 12:37:54.112503] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.024 [2024-11-19 12:37:54.112507] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112511] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.024 [2024-11-19 12:37:54.112521] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112526] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112530] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.024 [2024-11-19 12:37:54.112538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.024 [2024-11-19 12:37:54.112554] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.024 [2024-11-19 12:37:54.112600] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.024 [2024-11-19 12:37:54.112607] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.024 [2024-11-19 12:37:54.112611] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112615] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.024 [2024-11-19 12:37:54.112625] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112629] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112633] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.024 [2024-11-19 12:37:54.112640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.024 [2024-11-19 12:37:54.112657] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.024 [2024-11-19 12:37:54.112714] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.024 [2024-11-19 12:37:54.112722] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.024 [2024-11-19 12:37:54.112726] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112730] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.024 [2024-11-19 12:37:54.112740] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112745] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.024 [2024-11-19 12:37:54.112749] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.024 [2024-11-19 12:37:54.112756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.025 [2024-11-19 12:37:54.112774] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.025 [2024-11-19 
12:37:54.112816] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.025 [2024-11-19 12:37:54.112823] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.025 [2024-11-19 12:37:54.112826] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.112830] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.025 [2024-11-19 12:37:54.112840] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.112845] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.112849] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.025 [2024-11-19 12:37:54.112856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.025 [2024-11-19 12:37:54.112872] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.025 [2024-11-19 12:37:54.112919] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.025 [2024-11-19 12:37:54.112925] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.025 [2024-11-19 12:37:54.112929] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.112933] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.025 [2024-11-19 12:37:54.112943] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.112948] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.112951] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.025 [2024-11-19 12:37:54.112959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.025 [2024-11-19 12:37:54.112975] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.025 [2024-11-19 12:37:54.113018] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.025 [2024-11-19 12:37:54.113025] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.025 [2024-11-19 12:37:54.113029] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113033] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.025 [2024-11-19 12:37:54.113043] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113047] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113051] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.025 [2024-11-19 12:37:54.113058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.025 [2024-11-19 12:37:54.113074] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.025 [2024-11-19 12:37:54.113124] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.025 [2024-11-19 12:37:54.113135] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.025 [2024-11-19 
12:37:54.113140] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113144] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.025 [2024-11-19 12:37:54.113155] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113160] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113163] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.025 [2024-11-19 12:37:54.113171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.025 [2024-11-19 12:37:54.113189] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.025 [2024-11-19 12:37:54.113231] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.025 [2024-11-19 12:37:54.113239] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.025 [2024-11-19 12:37:54.113242] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113247] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.025 [2024-11-19 12:37:54.113257] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113262] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113266] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.025 [2024-11-19 12:37:54.113273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.025 [2024-11-19 12:37:54.113289] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.025 [2024-11-19 12:37:54.113330] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.025 [2024-11-19 12:37:54.113337] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.025 [2024-11-19 12:37:54.113340] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113344] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.025 [2024-11-19 12:37:54.113354] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113359] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113363] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.025 [2024-11-19 12:37:54.113370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.025 [2024-11-19 12:37:54.113387] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.025 [2024-11-19 12:37:54.113436] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.025 [2024-11-19 12:37:54.113443] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.025 [2024-11-19 12:37:54.113446] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113450] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 
00:17:49.025 [2024-11-19 12:37:54.113461] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113465] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113469] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.025 [2024-11-19 12:37:54.113476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.025 [2024-11-19 12:37:54.113492] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.025 [2024-11-19 12:37:54.113537] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.025 [2024-11-19 12:37:54.113543] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.025 [2024-11-19 12:37:54.113547] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113551] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.025 [2024-11-19 12:37:54.113561] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113566] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113570] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.025 [2024-11-19 12:37:54.113577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.025 [2024-11-19 12:37:54.113594] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.025 [2024-11-19 12:37:54.113635] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.025 [2024-11-19 12:37:54.113641] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.025 [2024-11-19 12:37:54.113645] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113649] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.025 [2024-11-19 12:37:54.113660] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113674] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113695] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.025 [2024-11-19 12:37:54.113719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.025 [2024-11-19 12:37:54.113738] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.025 [2024-11-19 12:37:54.113788] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.025 [2024-11-19 12:37:54.113796] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.025 [2024-11-19 12:37:54.113800] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113804] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.025 [2024-11-19 12:37:54.113815] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113820] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:49.025 [2024-11-19 12:37:54.113824] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.025 [2024-11-19 12:37:54.113832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.025 [2024-11-19 12:37:54.113857] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.025 [2024-11-19 12:37:54.113904] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.025 [2024-11-19 12:37:54.113921] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.025 [2024-11-19 12:37:54.113926] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113931] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.025 [2024-11-19 12:37:54.113942] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113947] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.113951] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.025 [2024-11-19 12:37:54.113959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.025 [2024-11-19 12:37:54.113978] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.025 [2024-11-19 12:37:54.114028] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.025 [2024-11-19 12:37:54.114057] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.025 [2024-11-19 12:37:54.114062] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.114066] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.025 [2024-11-19 12:37:54.114077] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.114097] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.025 [2024-11-19 12:37:54.114101] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.025 [2024-11-19 12:37:54.114108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.026 [2024-11-19 12:37:54.114126] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.026 [2024-11-19 12:37:54.114170] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.026 [2024-11-19 12:37:54.114176] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.026 [2024-11-19 12:37:54.114180] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.114184] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.026 [2024-11-19 12:37:54.114194] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.114199] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.114202] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.026 [2024-11-19 12:37:54.114210] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.026 [2024-11-19 12:37:54.114226] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.026 [2024-11-19 12:37:54.114271] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.026 [2024-11-19 12:37:54.114281] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.026 [2024-11-19 12:37:54.114286] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.114290] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.026 [2024-11-19 12:37:54.114300] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.114305] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.114309] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.026 [2024-11-19 12:37:54.114316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.026 [2024-11-19 12:37:54.114333] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.026 [2024-11-19 12:37:54.114380] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.026 [2024-11-19 12:37:54.114387] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.026 [2024-11-19 12:37:54.114390] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.114395] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.026 [2024-11-19 12:37:54.114405] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.114409] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.114413] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.026 [2024-11-19 12:37:54.114421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.026 [2024-11-19 12:37:54.114437] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.026 [2024-11-19 12:37:54.114481] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.026 [2024-11-19 12:37:54.114487] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.026 [2024-11-19 12:37:54.114491] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.114495] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.026 [2024-11-19 12:37:54.114505] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.114510] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.114514] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.026 [2024-11-19 12:37:54.114521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.026 [2024-11-19 12:37:54.114537] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x569540, cid 3, qid 0 00:17:49.026 [2024-11-19 12:37:54.114582] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.026 [2024-11-19 12:37:54.114592] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.026 [2024-11-19 12:37:54.114597] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.114601] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.026 [2024-11-19 12:37:54.114611] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.114616] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.114620] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.026 [2024-11-19 12:37:54.114627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.026 [2024-11-19 12:37:54.114644] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.026 [2024-11-19 12:37:54.117726] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.026 [2024-11-19 12:37:54.117746] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.026 [2024-11-19 12:37:54.117751] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.117771] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.026 [2024-11-19 12:37:54.117786] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.117792] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.117796] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x522bd0) 00:17:49.026 [2024-11-19 12:37:54.117804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:49.026 [2024-11-19 12:37:54.117830] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x569540, cid 3, qid 0 00:17:49.026 [2024-11-19 12:37:54.117884] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:49.026 [2024-11-19 12:37:54.117891] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:49.026 [2024-11-19 12:37:54.117894] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:49.026 [2024-11-19 12:37:54.117899] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x569540) on tqpair=0x522bd0 00:17:49.026 [2024-11-19 12:37:54.117916] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:17:49.026 sed: 0% 00:17:49.026 Data Units Read: 0 00:17:49.026 Data Units Written: 0 00:17:49.026 Host Read Commands: 0 00:17:49.026 Host Write Commands: 0 00:17:49.026 Controller Busy Time: 0 minutes 00:17:49.026 Power Cycles: 0 00:17:49.026 Power On Hours: 0 hours 00:17:49.026 Unsafe Shutdowns: 0 00:17:49.026 Unrecoverable Media Errors: 0 00:17:49.026 Lifetime Error Log Entries: 0 00:17:49.026 Warning Temperature Time: 0 minutes 00:17:49.026 Critical Temperature Time: 0 minutes 00:17:49.026 00:17:49.026 Number of Queues 00:17:49.026 ================ 00:17:49.026 Number of I/O Submission Queues: 127 00:17:49.026 Number of I/O Completion Queues: 127 00:17:49.026 
00:17:49.026 Active Namespaces 00:17:49.026 ================= 00:17:49.026 Namespace ID:1 00:17:49.026 Error Recovery Timeout: Unlimited 00:17:49.026 Command Set Identifier: NVM (00h) 00:17:49.026 Deallocate: Supported 00:17:49.026 Deallocated/Unwritten Error: Not Supported 00:17:49.026 Deallocated Read Value: Unknown 00:17:49.026 Deallocate in Write Zeroes: Not Supported 00:17:49.026 Deallocated Guard Field: 0xFFFF 00:17:49.026 Flush: Supported 00:17:49.026 Reservation: Supported 00:17:49.026 Namespace Sharing Capabilities: Multiple Controllers 00:17:49.026 Size (in LBAs): 131072 (0GiB) 00:17:49.026 Capacity (in LBAs): 131072 (0GiB) 00:17:49.026 Utilization (in LBAs): 131072 (0GiB) 00:17:49.026 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:49.026 EUI64: ABCDEF0123456789 00:17:49.026 UUID: b59cefb2-57e9-418d-932d-5fa668cef50b 00:17:49.026 Thin Provisioning: Not Supported 00:17:49.026 Per-NS Atomic Units: Yes 00:17:49.026 Atomic Boundary Size (Normal): 0 00:17:49.026 Atomic Boundary Size (PFail): 0 00:17:49.026 Atomic Boundary Offset: 0 00:17:49.026 Maximum Single Source Range Length: 65535 00:17:49.026 Maximum Copy Length: 65535 00:17:49.026 Maximum Source Range Count: 1 00:17:49.026 NGUID/EUI64 Never Reused: No 00:17:49.026 Namespace Write Protected: No 00:17:49.026 Number of LBA Formats: 1 00:17:49.026 Current LBA Format: LBA Format #00 00:17:49.026 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:49.026 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:49.026 rmmod nvme_tcp 00:17:49.026 rmmod nvme_fabrics 00:17:49.026 rmmod nvme_keyring 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 88878 ']' 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 88878 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 88878 ']' 00:17:49.026 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 
88878 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88878 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:49.286 killing process with pid 88878 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88878' 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 88878 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 88878 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:49.286 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:49.546 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:49.546 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:49.546 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:49.546 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:49.546 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:49.546 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:49.546 12:37:54 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.546 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.546 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.546 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:49.546 00:17:49.546 real 0m2.084s 00:17:49.546 user 0m4.115s 00:17:49.546 sys 0m0.682s 00:17:49.546 ************************************ 00:17:49.546 END TEST nvmf_identify 00:17:49.546 ************************************ 00:17:49.546 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:49.546 12:37:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:49.546 12:37:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:49.546 12:37:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:49.546 12:37:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:49.546 12:37:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.546 ************************************ 00:17:49.546 START TEST nvmf_perf 00:17:49.546 ************************************ 00:17:49.546 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:49.807 * Looking for test storage... 00:17:49.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > 
ver2_l ? ver1_l : ver2_l) )) 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:49.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.807 --rc genhtml_branch_coverage=1 00:17:49.807 --rc genhtml_function_coverage=1 00:17:49.807 --rc genhtml_legend=1 00:17:49.807 --rc geninfo_all_blocks=1 00:17:49.807 --rc geninfo_unexecuted_blocks=1 00:17:49.807 00:17:49.807 ' 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:49.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.807 --rc genhtml_branch_coverage=1 00:17:49.807 --rc genhtml_function_coverage=1 00:17:49.807 --rc genhtml_legend=1 00:17:49.807 --rc geninfo_all_blocks=1 00:17:49.807 --rc geninfo_unexecuted_blocks=1 00:17:49.807 00:17:49.807 ' 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:49.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.807 --rc genhtml_branch_coverage=1 00:17:49.807 --rc genhtml_function_coverage=1 00:17:49.807 --rc genhtml_legend=1 00:17:49.807 --rc geninfo_all_blocks=1 00:17:49.807 --rc geninfo_unexecuted_blocks=1 00:17:49.807 00:17:49.807 ' 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:49.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.807 --rc genhtml_branch_coverage=1 00:17:49.807 --rc genhtml_function_coverage=1 00:17:49.807 --rc genhtml_legend=1 00:17:49.807 --rc geninfo_all_blocks=1 00:17:49.807 --rc geninfo_unexecuted_blocks=1 00:17:49.807 00:17:49.807 ' 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.807 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:49.808 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:49.808 Cannot find device "nvmf_init_br" 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:49.808 Cannot find device "nvmf_init_br2" 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:49.808 Cannot find device "nvmf_tgt_br" 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:49.808 12:37:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:49.808 Cannot find device "nvmf_tgt_br2" 00:17:49.808 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:49.808 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:49.808 Cannot find device "nvmf_init_br" 00:17:49.808 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:49.808 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:49.808 Cannot find device "nvmf_init_br2" 00:17:49.808 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:49.808 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:49.808 Cannot find device "nvmf_tgt_br" 00:17:49.808 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:49.808 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:49.808 Cannot find device "nvmf_tgt_br2" 00:17:49.808 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:49.808 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:49.808 Cannot find device "nvmf_br" 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:50.068 Cannot find device "nvmf_init_if" 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:50.068 Cannot find device "nvmf_init_if2" 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:50.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:50.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:50.068 12:37:55 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:50.068 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:50.069 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:50.069 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:17:50.069 00:17:50.069 --- 10.0.0.3 ping statistics --- 00:17:50.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.069 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:50.069 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:50.069 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:50.069 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:17:50.069 00:17:50.069 --- 10.0.0.4 ping statistics --- 00:17:50.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.069 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:50.069 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:50.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:50.069 00:17:50.069 --- 10.0.0.1 ping statistics --- 00:17:50.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.069 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:50.069 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:50.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:50.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:17:50.069 00:17:50.069 --- 10.0.0.2 ping statistics --- 00:17:50.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.069 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:50.069 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.069 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # return 0 00:17:50.069 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:50.069 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.069 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:50.069 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:50.069 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.069 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:50.069 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:50.328 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:50.328 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:50.328 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:50.328 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:50.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.328 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:50.328 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=89126 00:17:50.328 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 89126 00:17:50.329 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 89126 ']' 00:17:50.329 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.329 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:50.329 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
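Editor's note: the nvmf_veth_init sequence traced above reduces to roughly the following topology setup. This is a condensed, hedged sketch of the commands visible in the trace (interface names, addresses, and the SPDK_NVMF iptables comment tag as they appear in this run), not a verbatim copy of nvmf/common.sh; the second initiator/target pair (nvmf_init_if2 / nvmf_tgt_if2 on 10.0.0.2 / 10.0.0.4) is built the same way and is omitted here for brevity.

    # Target-side interfaces live in the nvmf_tgt_ns_spdk namespace; the
    # bridge ends of each veth pair are enslaved to nvmf_br on the host.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # Open the NVMe/TCP port; the comment tag lets teardown strip the rule later.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3   # host-to-target reachability check before nvmf_tgt starts

With this in place, nvmf_tgt is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt), listens on 10.0.0.3:4420, and the host-side initiator reaches it across the bridge, which is what the ping statistics above verify.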
00:17:50.329 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:50.329 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:50.329 [2024-11-19 12:37:55.398465] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:50.329 [2024-11-19 12:37:55.398785] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.329 [2024-11-19 12:37:55.539615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:50.329 [2024-11-19 12:37:55.576451] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.329 [2024-11-19 12:37:55.576704] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.329 [2024-11-19 12:37:55.576836] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.329 [2024-11-19 12:37:55.577008] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.329 [2024-11-19 12:37:55.577043] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.329 [2024-11-19 12:37:55.577441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.329 [2024-11-19 12:37:55.577561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.329 [2024-11-19 12:37:55.577634] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:50.329 [2024-11-19 12:37:55.577638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.588 [2024-11-19 12:37:55.608063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:50.588 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:50.588 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:17:50.588 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:50.588 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:50.588 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:50.588 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.588 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:50.588 12:37:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:50.846 12:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:50.846 12:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:51.415 12:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:51.415 12:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:51.675 12:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:51.675 12:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:00:10.0 ']' 00:17:51.675 12:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:51.675 12:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:51.675 12:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:51.675 [2024-11-19 12:37:56.913284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.934 12:37:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.934 12:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:51.934 12:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:52.193 12:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:52.193 12:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:52.452 12:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:52.710 [2024-11-19 12:37:57.930547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:52.710 12:37:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:52.969 12:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:52.969 12:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:52.969 12:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:52.969 12:37:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:54.346 Initializing NVMe Controllers 00:17:54.346 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:54.346 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:54.347 Initialization complete. Launching workers. 00:17:54.347 ======================================================== 00:17:54.347 Latency(us) 00:17:54.347 Device Information : IOPS MiB/s Average min max 00:17:54.347 PCIE (0000:00:10.0) NSID 1 from core 0: 24412.75 95.36 1310.66 354.93 8008.57 00:17:54.347 ======================================================== 00:17:54.347 Total : 24412.75 95.36 1310.66 354.93 8008.57 00:17:54.347 00:17:54.347 12:37:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:55.283 Initializing NVMe Controllers 00:17:55.283 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:55.283 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:55.283 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:55.283 Initialization complete. Launching workers. 
00:17:55.283 ======================================================== 00:17:55.283 Latency(us) 00:17:55.283 Device Information : IOPS MiB/s Average min max 00:17:55.284 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3922.00 15.32 254.58 95.03 7171.85 00:17:55.284 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.00 0.48 8261.21 7003.35 14967.61 00:17:55.284 ======================================================== 00:17:55.284 Total : 4044.00 15.80 496.13 95.03 14967.61 00:17:55.284 00:17:55.543 12:38:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:56.920 Initializing NVMe Controllers 00:17:56.920 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:56.920 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:56.920 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:56.920 Initialization complete. Launching workers. 00:17:56.920 ======================================================== 00:17:56.920 Latency(us) 00:17:56.920 Device Information : IOPS MiB/s Average min max 00:17:56.920 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9116.65 35.61 3510.50 463.41 9074.00 00:17:56.920 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3957.92 15.46 8098.73 6858.35 16668.56 00:17:56.920 ======================================================== 00:17:56.920 Total : 13074.57 51.07 4899.44 463.41 16668.56 00:17:56.920 00:17:56.920 12:38:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:56.920 12:38:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:59.455 Initializing NVMe Controllers 00:17:59.455 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:59.455 Controller IO queue size 128, less than required. 00:17:59.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:59.455 Controller IO queue size 128, less than required. 00:17:59.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:59.456 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:59.456 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:59.456 Initialization complete. Launching workers. 
00:17:59.456 ======================================================== 00:17:59.456 Latency(us) 00:17:59.456 Device Information : IOPS MiB/s Average min max 00:17:59.456 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1889.83 472.46 68809.96 33307.05 95200.41 00:17:59.456 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 674.40 168.60 192891.58 49753.48 313637.11 00:17:59.456 ======================================================== 00:17:59.456 Total : 2564.23 641.06 101443.93 33307.05 313637.11 00:17:59.456 00:17:59.456 12:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:17:59.714 Initializing NVMe Controllers 00:17:59.714 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:59.714 Controller IO queue size 128, less than required. 00:17:59.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:59.714 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:59.714 Controller IO queue size 128, less than required. 00:17:59.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:59.714 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:17:59.714 WARNING: Some requested NVMe devices were skipped 00:17:59.714 No valid NVMe controllers or AIO or URING devices found 00:17:59.715 12:38:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:18:02.250 Initializing NVMe Controllers 00:18:02.250 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.250 Controller IO queue size 128, less than required. 00:18:02.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.250 Controller IO queue size 128, less than required. 00:18:02.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.250 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:02.250 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:02.250 Initialization complete. Launching workers. 
00:18:02.250 00:18:02.250 ==================== 00:18:02.250 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:02.250 TCP transport: 00:18:02.250 polls: 9994 00:18:02.250 idle_polls: 6239 00:18:02.250 sock_completions: 3755 00:18:02.250 nvme_completions: 7193 00:18:02.250 submitted_requests: 10860 00:18:02.250 queued_requests: 1 00:18:02.250 00:18:02.250 ==================== 00:18:02.250 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:02.250 TCP transport: 00:18:02.250 polls: 9473 00:18:02.250 idle_polls: 5117 00:18:02.250 sock_completions: 4356 00:18:02.250 nvme_completions: 6955 00:18:02.250 submitted_requests: 10398 00:18:02.250 queued_requests: 1 00:18:02.250 ======================================================== 00:18:02.250 Latency(us) 00:18:02.250 Device Information : IOPS MiB/s Average min max 00:18:02.250 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1797.95 449.49 72133.18 36255.18 115599.38 00:18:02.250 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1738.45 434.61 74304.39 30309.27 137398.67 00:18:02.250 ======================================================== 00:18:02.250 Total : 3536.40 884.10 73200.52 30309.27 137398.67 00:18:02.250 00:18:02.250 12:38:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:02.250 12:38:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:02.509 12:38:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:18:02.509 12:38:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:18:02.509 12:38:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:18:02.768 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=d061b924-6770-4d1f-a6b1-c96dffb94beb 00:18:02.768 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb d061b924-6770-4d1f-a6b1-c96dffb94beb 00:18:02.768 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=d061b924-6770-4d1f-a6b1-c96dffb94beb 00:18:02.768 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:02.768 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:18:02.768 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:18:02.768 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:03.336 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:03.336 { 00:18:03.336 "uuid": "d061b924-6770-4d1f-a6b1-c96dffb94beb", 00:18:03.336 "name": "lvs_0", 00:18:03.336 "base_bdev": "Nvme0n1", 00:18:03.336 "total_data_clusters": 1278, 00:18:03.336 "free_clusters": 1278, 00:18:03.336 "block_size": 4096, 00:18:03.336 "cluster_size": 4194304 00:18:03.336 } 00:18:03.336 ]' 00:18:03.336 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d061b924-6770-4d1f-a6b1-c96dffb94beb") .free_clusters' 00:18:03.336 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:18:03.336 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="d061b924-6770-4d1f-a6b1-c96dffb94beb") .cluster_size' 00:18:03.336 5112 00:18:03.336 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:03.336 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:18:03.336 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:18:03.336 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:18:03.336 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d061b924-6770-4d1f-a6b1-c96dffb94beb lbd_0 5112 00:18:03.596 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=46473636-3e3e-484d-a900-e3f90c761644 00:18:03.596 12:38:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 46473636-3e3e-484d-a900-e3f90c761644 lvs_n_0 00:18:03.855 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=cdcc6266-133d-4041-a6e0-95581a23563f 00:18:03.855 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb cdcc6266-133d-4041-a6e0-95581a23563f 00:18:03.855 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=cdcc6266-133d-4041-a6e0-95581a23563f 00:18:03.855 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:03.855 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:18:03.855 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:18:03.855 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:04.116 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:04.116 { 00:18:04.116 "uuid": "d061b924-6770-4d1f-a6b1-c96dffb94beb", 00:18:04.116 "name": "lvs_0", 00:18:04.116 "base_bdev": "Nvme0n1", 00:18:04.116 "total_data_clusters": 1278, 00:18:04.116 "free_clusters": 0, 00:18:04.116 "block_size": 4096, 00:18:04.117 "cluster_size": 4194304 00:18:04.117 }, 00:18:04.117 { 00:18:04.117 "uuid": "cdcc6266-133d-4041-a6e0-95581a23563f", 00:18:04.117 "name": "lvs_n_0", 00:18:04.117 "base_bdev": "46473636-3e3e-484d-a900-e3f90c761644", 00:18:04.117 "total_data_clusters": 1276, 00:18:04.117 "free_clusters": 1276, 00:18:04.117 "block_size": 4096, 00:18:04.117 "cluster_size": 4194304 00:18:04.117 } 00:18:04.117 ]' 00:18:04.117 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="cdcc6266-133d-4041-a6e0-95581a23563f") .free_clusters' 00:18:04.376 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:18:04.376 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="cdcc6266-133d-4041-a6e0-95581a23563f") .cluster_size' 00:18:04.376 5104 00:18:04.376 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:04.376 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:18:04.376 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:18:04.376 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:18:04.376 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cdcc6266-133d-4041-a6e0-95581a23563f lbd_nest_0 5104 00:18:04.638 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=d4ee62a2-38a3-4971-8b76-e4fee30e0c9a 00:18:04.639 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:04.901 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:18:04.901 12:38:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 d4ee62a2-38a3-4971-8b76-e4fee30e0c9a 00:18:05.161 12:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:05.420 12:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:18:05.420 12:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:18:05.420 12:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:05.420 12:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:05.420 12:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:05.678 Initializing NVMe Controllers 00:18:05.678 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:05.678 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:05.678 WARNING: Some requested NVMe devices were skipped 00:18:05.678 No valid NVMe controllers or AIO or URING devices found 00:18:05.678 12:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:05.678 12:38:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:17.891 Initializing NVMe Controllers 00:18:17.891 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:17.891 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:17.891 Initialization complete. Launching workers. 
00:18:17.891 ======================================================== 00:18:17.891 Latency(us) 00:18:17.891 Device Information : IOPS MiB/s Average min max 00:18:17.891 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 954.20 119.27 1047.50 329.23 7790.88 00:18:17.891 ======================================================== 00:18:17.891 Total : 954.20 119.27 1047.50 329.23 7790.88 00:18:17.891 00:18:17.891 12:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:17.891 12:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:17.891 12:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:17.891 Initializing NVMe Controllers 00:18:17.891 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:17.891 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:17.891 WARNING: Some requested NVMe devices were skipped 00:18:17.891 No valid NVMe controllers or AIO or URING devices found 00:18:17.891 12:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:17.891 12:38:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:27.873 Initializing NVMe Controllers 00:18:27.873 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:27.873 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:27.873 Initialization complete. Launching workers. 
00:18:27.873 ======================================================== 00:18:27.873 Latency(us) 00:18:27.873 Device Information : IOPS MiB/s Average min max 00:18:27.874 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1339.40 167.42 23911.49 5398.90 65916.71 00:18:27.874 ======================================================== 00:18:27.874 Total : 1339.40 167.42 23911.49 5398.90 65916.71 00:18:27.874 00:18:27.874 12:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:27.874 12:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:27.874 12:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:27.874 Initializing NVMe Controllers 00:18:27.874 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:27.874 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:27.874 WARNING: Some requested NVMe devices were skipped 00:18:27.874 No valid NVMe controllers or AIO or URING devices found 00:18:27.874 12:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:27.874 12:38:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:37.926 Initializing NVMe Controllers 00:18:37.926 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:37.926 Controller IO queue size 128, less than required. 00:18:37.926 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:37.926 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:37.926 Initialization complete. Launching workers. 
00:18:37.926 ======================================================== 00:18:37.926 Latency(us) 00:18:37.926 Device Information : IOPS MiB/s Average min max 00:18:37.926 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4110.14 513.77 31197.22 11867.99 63541.63 00:18:37.926 ======================================================== 00:18:37.926 Total : 4110.14 513.77 31197.22 11867.99 63541.63 00:18:37.926 00:18:37.926 12:38:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:37.926 12:38:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d4ee62a2-38a3-4971-8b76-e4fee30e0c9a 00:18:37.926 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:38.184 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 46473636-3e3e-484d-a900-e3f90c761644 00:18:38.441 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:38.700 rmmod nvme_tcp 00:18:38.700 rmmod nvme_fabrics 00:18:38.700 rmmod nvme_keyring 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 89126 ']' 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 89126 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 89126 ']' 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 89126 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89126 00:18:38.700 killing process with pid 89126 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89126' 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@969 -- # kill 89126 00:18:38.700 12:38:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 89126 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:40.079 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:40.339 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:40.339 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:40.339 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:40.339 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.339 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:40.339 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.339 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.339 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.339 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:18:40.339 00:18:40.339 real 0m50.718s 00:18:40.339 user 3m9.803s 00:18:40.339 sys 0m12.192s 00:18:40.339 ************************************ 00:18:40.339 END TEST nvmf_perf 00:18:40.339 ************************************ 00:18:40.339 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:40.339 12:38:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:40.339 12:38:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:40.339 12:38:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:40.339 12:38:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:40.339 12:38:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.339 ************************************ 00:18:40.339 START TEST nvmf_fio_host 00:18:40.339 ************************************ 00:18:40.339 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:40.600 * Looking for test storage... 00:18:40.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:40.600 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:40.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.601 --rc genhtml_branch_coverage=1 00:18:40.601 --rc genhtml_function_coverage=1 00:18:40.601 --rc genhtml_legend=1 00:18:40.601 --rc geninfo_all_blocks=1 00:18:40.601 --rc geninfo_unexecuted_blocks=1 00:18:40.601 00:18:40.601 ' 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:40.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.601 --rc genhtml_branch_coverage=1 00:18:40.601 --rc genhtml_function_coverage=1 00:18:40.601 --rc genhtml_legend=1 00:18:40.601 --rc geninfo_all_blocks=1 00:18:40.601 --rc geninfo_unexecuted_blocks=1 00:18:40.601 00:18:40.601 ' 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:40.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.601 --rc genhtml_branch_coverage=1 00:18:40.601 --rc genhtml_function_coverage=1 00:18:40.601 --rc genhtml_legend=1 00:18:40.601 --rc geninfo_all_blocks=1 00:18:40.601 --rc geninfo_unexecuted_blocks=1 00:18:40.601 00:18:40.601 ' 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:40.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.601 --rc genhtml_branch_coverage=1 00:18:40.601 --rc genhtml_function_coverage=1 00:18:40.601 --rc genhtml_legend=1 00:18:40.601 --rc geninfo_all_blocks=1 00:18:40.601 --rc geninfo_unexecuted_blocks=1 00:18:40.601 00:18:40.601 ' 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.601 12:38:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.601 12:38:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:40.601 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.601 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
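The nvmftestinit call above hands off to nvmf_veth_init, and the log lines that follow show it tearing down any stale namespace and rebuilding the veth/bridge topology. As a rough, consolidated sketch (not the test's own helper) of what those commands amount to, assuming root privileges and the interface/namespace names printed in the log:

#!/usr/bin/env bash
# Minimal sketch of the veth/bridge topology nvmf_veth_init builds in the log
# lines below. Names and addresses are taken from the log; run as root.
set -e
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# Initiator-side and target-side veth pairs.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target ends live inside the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addresses: initiator 10.0.0.1/.2, target 10.0.0.3/.4 (as in the log).
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP traffic to port 4420 and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check: the target address should answer from the host side.
ping -c 1 10.0.0.3

The bridge ties the host-side peers together so the initiator at 10.0.0.1/10.0.0.2 can reach the target addresses 10.0.0.3/10.0.0.4 inside nvmf_tgt_ns_spdk, which the ping checks further down in the log confirm.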
00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:40.602 Cannot find device "nvmf_init_br" 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:40.602 Cannot find device "nvmf_init_br2" 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:40.602 Cannot find device "nvmf_tgt_br" 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:40.602 Cannot find device "nvmf_tgt_br2" 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:40.602 Cannot find device "nvmf_init_br" 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:40.602 Cannot find device "nvmf_init_br2" 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:18:40.602 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:40.861 Cannot find device "nvmf_tgt_br" 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:40.861 Cannot find device "nvmf_tgt_br2" 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:40.861 Cannot find device "nvmf_br" 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:40.861 Cannot find device "nvmf_init_if" 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:40.861 Cannot find device "nvmf_init_if2" 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:40.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:40.861 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:40.862 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:40.862 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:40.862 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:40.862 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:18:40.862 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:40.862 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:40.862 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:40.862 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:40.862 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:40.862 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:40.862 12:38:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:40.862 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:40.862 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:18:40.862 00:18:40.862 --- 10.0.0.3 ping statistics --- 00:18:40.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.862 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:40.862 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:40.862 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:18:40.862 00:18:40.862 --- 10.0.0.4 ping statistics --- 00:18:40.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.862 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:40.862 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:41.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:41.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:18:41.120 00:18:41.120 --- 10.0.0.1 ping statistics --- 00:18:41.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.120 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:41.120 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:41.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:41.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:18:41.121 00:18:41.121 --- 10.0.0.2 ping statistics --- 00:18:41.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.121 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # return 0 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=90016 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 90016 00:18:41.121 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 90016 ']' 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:41.121 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.121 [2024-11-19 12:38:46.218203] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:41.121 [2024-11-19 12:38:46.218481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.121 [2024-11-19 12:38:46.355562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:41.379 [2024-11-19 12:38:46.389695] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.379 [2024-11-19 12:38:46.389745] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.379 [2024-11-19 12:38:46.389771] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.379 [2024-11-19 12:38:46.389779] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.379 [2024-11-19 12:38:46.389785] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
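The target startup notices continue below (reactor cores, the uring socket override) before fio.sh creates the TCP transport. As a hedged sketch of how this step could be driven by hand, assuming the default /var/tmp/spdk.sock RPC socket and using spdk_get_version merely as a readiness probe (the test itself relies on its waitforlisten helper):

#!/usr/bin/env bash
# Sketch: launch the SPDK NVMe-oF target inside the test namespace with the
# same flags shown in the log, wait for its JSON-RPC socket, then create the
# TCP transport the host tests connect through.
set -e
SPDK_DIR=/home/vagrant/spdk_repo/spdk

ip netns exec nvmf_tgt_ns_spdk \
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
tgt_pid=$!

# Poll the default RPC socket until the target answers.
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.2
done

# Same transport options as in the log (NVMF_TRANSPORT_OPTS='-t tcp -o').
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192

echo "nvmf_tgt (pid $tgt_pid) is up"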
00:18:41.379 [2024-11-19 12:38:46.389930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.379 [2024-11-19 12:38:46.390587] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.379 [2024-11-19 12:38:46.390715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:41.379 [2024-11-19 12:38:46.390722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.379 [2024-11-19 12:38:46.419827] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:41.379 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:41.379 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:18:41.379 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:41.638 [2024-11-19 12:38:46.777917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.638 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:41.638 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:41.638 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.638 12:38:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:41.896 Malloc1 00:18:41.896 12:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:42.462 12:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:42.462 12:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:42.721 [2024-11-19 12:38:47.960639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:42.980 12:38:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:42.980 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:42.980 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:42.980 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:42.980 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:42.980 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:42.980 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:42.980 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:42.980 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:42.980 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:42.980 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:42.980 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:42.980 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:42.980 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:43.239 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:43.239 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:43.239 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:43.239 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:43.239 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:43.239 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:43.239 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:43.239 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:43.239 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:43.239 12:38:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:43.239 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:43.239 fio-3.35 00:18:43.239 Starting 1 thread 00:18:45.772 00:18:45.772 test: (groupid=0, jobs=1): err= 0: pid=90086: Tue Nov 19 12:38:50 2024 00:18:45.772 read: IOPS=8764, BW=34.2MiB/s (35.9MB/s)(68.7MiB/2007msec) 00:18:45.772 slat (nsec): min=1912, max=319542, avg=2498.21, stdev=3362.69 00:18:45.772 clat (usec): min=2546, max=12927, avg=7597.60, stdev=562.87 00:18:45.772 lat (usec): min=2604, max=12929, avg=7600.10, stdev=562.61 00:18:45.772 clat percentiles (usec): 00:18:45.772 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 6915], 20.00th=[ 7177], 00:18:45.772 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7701], 00:18:45.772 | 70.00th=[ 7898], 80.00th=[ 8029], 90.00th=[ 8225], 95.00th=[ 8455], 00:18:45.772 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[10945], 99.95th=[12387], 00:18:45.772 | 99.99th=[12911] 00:18:45.772 bw ( KiB/s): min=33208, max=36584, per=100.00%, avg=35074.00, stdev=1459.62, samples=4 00:18:45.772 iops : min= 8302, max= 9146, avg=8768.50, stdev=364.91, samples=4 00:18:45.772 write: IOPS=8771, BW=34.3MiB/s (35.9MB/s)(68.8MiB/2007msec); 0 zone resets 00:18:45.772 slat (nsec): min=1985, max=253818, avg=2556.46, stdev=2492.13 00:18:45.772 clat (usec): min=2403, max=13008, avg=6933.90, stdev=533.25 00:18:45.772 lat (usec): min=2417, max=13010, avg=6936.46, stdev=533.10 00:18:45.772 
clat percentiles (usec): 00:18:45.772 | 1.00th=[ 5866], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6521], 00:18:45.772 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7046], 00:18:45.772 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7701], 00:18:45.772 | 99.00th=[ 8160], 99.50th=[ 8455], 99.90th=[11600], 99.95th=[12649], 00:18:45.772 | 99.99th=[13042] 00:18:45.772 bw ( KiB/s): min=34184, max=35648, per=99.93%, avg=35062.00, stdev=698.63, samples=4 00:18:45.772 iops : min= 8546, max= 8912, avg=8765.50, stdev=174.66, samples=4 00:18:45.772 lat (msec) : 4=0.08%, 10=99.75%, 20=0.17% 00:18:45.772 cpu : usr=70.54%, sys=22.48%, ctx=5, majf=0, minf=8 00:18:45.772 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:45.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:45.772 issued rwts: total=17590,17605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.772 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:45.772 00:18:45.772 Run status group 0 (all jobs): 00:18:45.772 READ: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=68.7MiB (72.0MB), run=2007-2007msec 00:18:45.772 WRITE: bw=34.3MiB/s (35.9MB/s), 34.3MiB/s-34.3MiB/s (35.9MB/s-35.9MB/s), io=68.8MiB (72.1MB), run=2007-2007msec 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:45.772 12:38:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:45.772 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:45.772 fio-3.35 00:18:45.772 Starting 1 thread 00:18:48.307 00:18:48.307 test: (groupid=0, jobs=1): err= 0: pid=90130: Tue Nov 19 12:38:53 2024 00:18:48.307 read: IOPS=8387, BW=131MiB/s (137MB/s)(263MiB/2008msec) 00:18:48.307 slat (usec): min=2, max=118, avg= 3.67, stdev= 2.26 00:18:48.307 clat (usec): min=1789, max=19430, avg=8588.19, stdev=2595.60 00:18:48.307 lat (usec): min=1793, max=19433, avg=8591.86, stdev=2595.66 00:18:48.307 clat percentiles (usec): 00:18:48.307 | 1.00th=[ 3884], 5.00th=[ 4752], 10.00th=[ 5342], 20.00th=[ 6259], 00:18:48.307 | 30.00th=[ 7046], 40.00th=[ 7767], 50.00th=[ 8455], 60.00th=[ 8979], 00:18:48.307 | 70.00th=[ 9896], 80.00th=[10814], 90.00th=[11731], 95.00th=[12911], 00:18:48.307 | 99.00th=[15926], 99.50th=[17957], 99.90th=[19006], 99.95th=[19268], 00:18:48.307 | 99.99th=[19530] 00:18:48.307 bw ( KiB/s): min=58880, max=79776, per=50.93%, avg=68352.00, stdev=10115.07, samples=4 00:18:48.307 iops : min= 3680, max= 4986, avg=4272.00, stdev=632.19, samples=4 00:18:48.307 write: IOPS=4966, BW=77.6MiB/s (81.4MB/s)(140MiB/1801msec); 0 zone resets 00:18:48.307 slat (usec): min=32, max=343, avg=37.87, stdev= 8.39 00:18:48.307 clat (usec): min=2991, max=20495, avg=11778.19, stdev=2100.79 00:18:48.307 lat (usec): min=3024, max=20546, avg=11816.06, stdev=2100.65 00:18:48.307 clat percentiles (usec): 00:18:48.307 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10159], 00:18:48.307 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11469], 60.00th=[12125], 00:18:48.307 | 70.00th=[12649], 80.00th=[13566], 90.00th=[14746], 95.00th=[15664], 00:18:48.307 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18744], 99.95th=[19792], 00:18:48.307 | 99.99th=[20579] 00:18:48.307 bw ( KiB/s): min=62976, max=81600, per=89.52%, avg=71136.00, stdev=9515.79, samples=4 00:18:48.307 iops : min= 3936, max= 5100, avg=4446.00, stdev=594.74, samples=4 00:18:48.307 lat (msec) : 2=0.02%, 4=0.86%, 10=51.90%, 20=47.22%, 50=0.01% 00:18:48.307 cpu : usr=82.61%, sys=13.30%, ctx=5, majf=0, minf=4 00:18:48.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:48.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:48.307 issued rwts: total=16842,8945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:48.307 00:18:48.307 Run status group 0 (all jobs): 
00:18:48.307 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=263MiB (276MB), run=2008-2008msec 00:18:48.307 WRITE: bw=77.6MiB/s (81.4MB/s), 77.6MiB/s-77.6MiB/s (81.4MB/s-81.4MB/s), io=140MiB (147MB), run=1801-1801msec 00:18:48.307 12:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:48.307 12:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:18:48.307 12:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:18:48.307 12:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:18:48.307 12:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:18:48.308 12:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:18:48.308 12:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:48.308 12:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:48.308 12:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:18:48.308 12:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:18:48.308 12:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:18:48.308 12:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:18:48.875 Nvme0n1 00:18:48.875 12:38:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:18:49.133 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=213037bb-123e-4214-91e1-389094278e41 00:18:49.134 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 213037bb-123e-4214-91e1-389094278e41 00:18:49.134 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=213037bb-123e-4214-91e1-389094278e41 00:18:49.134 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:49.134 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:18:49.134 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:18:49.134 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:49.392 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:49.392 { 00:18:49.392 "uuid": "213037bb-123e-4214-91e1-389094278e41", 00:18:49.392 "name": "lvs_0", 00:18:49.392 "base_bdev": "Nvme0n1", 00:18:49.392 "total_data_clusters": 4, 00:18:49.392 "free_clusters": 4, 00:18:49.392 "block_size": 4096, 00:18:49.392 "cluster_size": 1073741824 00:18:49.392 } 00:18:49.392 ]' 00:18:49.392 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="213037bb-123e-4214-91e1-389094278e41") .free_clusters' 00:18:49.392 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 
00:18:49.392 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="213037bb-123e-4214-91e1-389094278e41") .cluster_size' 00:18:49.392 4096 00:18:49.392 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:18:49.392 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:18:49.392 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:18:49.392 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:18:49.651 c807612c-b3dd-4811-87b6-fbde0519399a 00:18:49.651 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:18:49.910 12:38:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:18:50.169 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:50.428 12:38:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:50.428 12:38:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:50.428 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:50.428 fio-3.35 00:18:50.428 Starting 1 thread 00:18:52.962 00:18:52.962 test: (groupid=0, jobs=1): err= 0: pid=90239: Tue Nov 19 12:38:57 2024 00:18:52.962 read: IOPS=6084, BW=23.8MiB/s (24.9MB/s)(47.7MiB/2008msec) 00:18:52.962 slat (nsec): min=1923, max=334888, avg=2778.27, stdev=4087.26 00:18:52.962 clat (usec): min=3038, max=19287, avg=11015.18, stdev=903.16 00:18:52.962 lat (usec): min=3046, max=19289, avg=11017.96, stdev=902.80 00:18:52.962 clat percentiles (usec): 00:18:52.962 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:18:52.962 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:18:52.962 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12387], 00:18:52.962 | 99.00th=[12911], 99.50th=[13435], 99.90th=[17433], 99.95th=[17695], 00:18:52.962 | 99.99th=[19268] 00:18:52.962 bw ( KiB/s): min=23152, max=24824, per=99.80%, avg=24290.00, stdev=766.95, samples=4 00:18:52.962 iops : min= 5788, max= 6206, avg=6072.50, stdev=191.74, samples=4 00:18:52.962 write: IOPS=6059, BW=23.7MiB/s (24.8MB/s)(47.5MiB/2008msec); 0 zone resets 00:18:52.962 slat (usec): min=2, max=238, avg= 2.85, stdev= 2.82 00:18:52.962 clat (usec): min=2402, max=17736, avg=9982.41, stdev=848.18 00:18:52.962 lat (usec): min=2415, max=17738, avg=9985.26, stdev=848.05 00:18:52.962 clat percentiles (usec): 00:18:52.962 | 1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:18:52.962 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:18:52.962 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:18:52.962 | 99.00th=[11863], 99.50th=[12125], 99.90th=[15926], 99.95th=[16319], 00:18:52.962 | 99.99th=[17695] 00:18:52.962 bw ( KiB/s): min=24136, max=24320, per=99.95%, avg=24226.00, stdev=79.57, samples=4 00:18:52.962 iops : min= 6034, max= 6080, avg=6056.50, stdev=19.89, samples=4 00:18:52.962 lat (msec) : 4=0.06%, 10=30.56%, 20=69.38% 00:18:52.962 cpu : usr=73.04%, sys=21.62%, ctx=19, majf=0, minf=8 00:18:52.962 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:18:52.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:52.962 issued rwts: total=12218,12167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:52.963 00:18:52.963 Run status group 0 (all jobs): 00:18:52.963 READ: bw=23.8MiB/s (24.9MB/s), 23.8MiB/s-23.8MiB/s (24.9MB/s-24.9MB/s), io=47.7MiB 
(50.0MB), run=2008-2008msec 00:18:52.963 WRITE: bw=23.7MiB/s (24.8MB/s), 23.7MiB/s-23.7MiB/s (24.8MB/s-24.8MB/s), io=47.5MiB (49.8MB), run=2008-2008msec 00:18:52.963 12:38:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:53.221 12:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:18:53.480 12:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=8b331b22-5a29-40c2-a39b-f4df340a03ff 00:18:53.480 12:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 8b331b22-5a29-40c2-a39b-f4df340a03ff 00:18:53.480 12:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=8b331b22-5a29-40c2-a39b-f4df340a03ff 00:18:53.480 12:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:53.480 12:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:18:53.480 12:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:18:53.480 12:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:53.738 12:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:53.738 { 00:18:53.738 "uuid": "213037bb-123e-4214-91e1-389094278e41", 00:18:53.738 "name": "lvs_0", 00:18:53.738 "base_bdev": "Nvme0n1", 00:18:53.738 "total_data_clusters": 4, 00:18:53.738 "free_clusters": 0, 00:18:53.738 "block_size": 4096, 00:18:53.738 "cluster_size": 1073741824 00:18:53.738 }, 00:18:53.738 { 00:18:53.738 "uuid": "8b331b22-5a29-40c2-a39b-f4df340a03ff", 00:18:53.738 "name": "lvs_n_0", 00:18:53.738 "base_bdev": "c807612c-b3dd-4811-87b6-fbde0519399a", 00:18:53.738 "total_data_clusters": 1022, 00:18:53.738 "free_clusters": 1022, 00:18:53.738 "block_size": 4096, 00:18:53.738 "cluster_size": 4194304 00:18:53.738 } 00:18:53.738 ]' 00:18:53.738 12:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="8b331b22-5a29-40c2-a39b-f4df340a03ff") .free_clusters' 00:18:53.738 12:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:18:53.738 12:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="8b331b22-5a29-40c2-a39b-f4df340a03ff") .cluster_size' 00:18:53.738 4088 00:18:53.738 12:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:53.738 12:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:18:53.738 12:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:18:53.738 12:38:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:18:53.999 5ecb0b0d-e0ed-4f57-a495-d18a246a24b3 00:18:54.259 12:38:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:18:54.517 12:38:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:18:54.776 
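The free-space figures computed a few entries above (4096 MiB for lvs_0, 4088 MiB for lvs_n_0) come from the same arithmetic: free_clusters multiplied by cluster_size, as reported by bdev_lvol_get_lvstores, converted to MiB. A minimal sketch of that computation using the values visible in the trace; the jq filters are copied from the log, everything else is shorthand rather than the helper's literal source:

    # Sketch of the free-space arithmetic shown above (values copied from the trace).
    lvs_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores)
    fc=$(jq '.[] | select(.uuid=="8b331b22-5a29-40c2-a39b-f4df340a03ff") .free_clusters' <<< "$lvs_json")  # 1022
    cs=$(jq '.[] | select(.uuid=="8b331b22-5a29-40c2-a39b-f4df340a03ff") .cluster_size'  <<< "$lvs_json")  # 4194304
    echo $(( fc * cs / 1024 / 1024 ))   # 1022 * 4 MiB = 4088 -> the size passed to bdev_lvol_create above
    # Same math for lvs_0: 4 clusters * 1073741824 B = 4096 MiB.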
12:38:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:18:55.034 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:55.034 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:55.034 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:55.034 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:55.034 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:55.034 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:55.034 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:55.034 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:55.034 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:55.034 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:55.034 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:55.034 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:55.035 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:55.035 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:55.035 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:55.035 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:55.035 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:55.035 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:55.035 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:55.035 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:55.035 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:55.035 12:39:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:55.035 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:55.035 fio-3.35 00:18:55.035 Starting 1 thread 00:18:57.592 00:18:57.592 test: (groupid=0, jobs=1): err= 0: pid=90324: Tue Nov 19 12:39:02 2024 
00:18:57.592 read: IOPS=5468, BW=21.4MiB/s (22.4MB/s)(42.9MiB/2010msec) 00:18:57.592 slat (usec): min=2, max=324, avg= 2.80, stdev= 4.28 00:18:57.592 clat (usec): min=3323, max=22177, avg=12266.25, stdev=1026.07 00:18:57.592 lat (usec): min=3332, max=22181, avg=12269.06, stdev=1025.74 00:18:57.592 clat percentiles (usec): 00:18:57.592 | 1.00th=[10028], 5.00th=[10814], 10.00th=[11076], 20.00th=[11469], 00:18:57.592 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:18:57.592 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[13829], 00:18:57.592 | 99.00th=[14484], 99.50th=[15008], 99.90th=[19530], 99.95th=[19792], 00:18:57.592 | 99.99th=[22152] 00:18:57.592 bw ( KiB/s): min=20880, max=22264, per=99.81%, avg=21832.00, stdev=642.23, samples=4 00:18:57.592 iops : min= 5220, max= 5566, avg=5458.00, stdev=160.56, samples=4 00:18:57.592 write: IOPS=5439, BW=21.2MiB/s (22.3MB/s)(42.7MiB/2010msec); 0 zone resets 00:18:57.592 slat (usec): min=2, max=298, avg= 2.88, stdev= 3.28 00:18:57.592 clat (usec): min=2484, max=20853, avg=11087.74, stdev=962.12 00:18:57.592 lat (usec): min=2498, max=20855, avg=11090.62, stdev=962.01 00:18:57.592 clat percentiles (usec): 00:18:57.592 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:18:57.592 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:18:57.592 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12125], 95.00th=[12518], 00:18:57.592 | 99.00th=[13173], 99.50th=[13435], 99.90th=[19268], 99.95th=[19530], 00:18:57.592 | 99.99th=[20841] 00:18:57.592 bw ( KiB/s): min=21376, max=22208, per=100.00%, avg=21762.00, stdev=351.05, samples=4 00:18:57.592 iops : min= 5344, max= 5552, avg=5440.50, stdev=87.76, samples=4 00:18:57.592 lat (msec) : 4=0.05%, 10=5.28%, 20=94.64%, 50=0.03% 00:18:57.592 cpu : usr=74.81%, sys=20.31%, ctx=7, majf=0, minf=8 00:18:57.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:18:57.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:57.592 issued rwts: total=10991,10933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.592 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:57.592 00:18:57.592 Run status group 0 (all jobs): 00:18:57.592 READ: bw=21.4MiB/s (22.4MB/s), 21.4MiB/s-21.4MiB/s (22.4MB/s-22.4MB/s), io=42.9MiB (45.0MB), run=2010-2010msec 00:18:57.592 WRITE: bw=21.2MiB/s (22.3MB/s), 21.2MiB/s-21.2MiB/s (22.3MB/s-22.3MB/s), io=42.7MiB (44.8MB), run=2010-2010msec 00:18:57.592 12:39:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:57.870 12:39:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:18:57.870 12:39:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:18:58.128 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:58.387 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:18:58.645 12:39:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:58.904 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:18:59.163 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:59.163 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:59.163 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:59.163 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:59.163 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:18:59.163 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:59.163 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:18:59.163 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:59.163 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:59.163 rmmod nvme_tcp 00:18:59.421 rmmod nvme_fabrics 00:18:59.421 rmmod nvme_keyring 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 90016 ']' 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 90016 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 90016 ']' 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 90016 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90016 00:18:59.421 killing process with pid 90016 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90016' 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 90016 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 90016 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:59.421 
12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:59.421 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:18:59.680 ************************************ 00:18:59.680 END TEST nvmf_fio_host 00:18:59.680 ************************************ 00:18:59.680 00:18:59.680 real 0m19.340s 00:18:59.680 user 1m25.097s 00:18:59.680 sys 0m4.388s 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.680 ************************************ 00:18:59.680 START TEST nvmf_failover 00:18:59.680 ************************************ 00:18:59.680 12:39:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:59.939 * Looking for test storage... 
00:18:59.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.939 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.940 --rc genhtml_branch_coverage=1 00:18:59.940 --rc genhtml_function_coverage=1 00:18:59.940 --rc genhtml_legend=1 00:18:59.940 --rc geninfo_all_blocks=1 00:18:59.940 --rc geninfo_unexecuted_blocks=1 00:18:59.940 00:18:59.940 ' 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.940 --rc genhtml_branch_coverage=1 00:18:59.940 --rc genhtml_function_coverage=1 00:18:59.940 --rc genhtml_legend=1 00:18:59.940 --rc geninfo_all_blocks=1 00:18:59.940 --rc geninfo_unexecuted_blocks=1 00:18:59.940 00:18:59.940 ' 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.940 --rc genhtml_branch_coverage=1 00:18:59.940 --rc genhtml_function_coverage=1 00:18:59.940 --rc genhtml_legend=1 00:18:59.940 --rc geninfo_all_blocks=1 00:18:59.940 --rc geninfo_unexecuted_blocks=1 00:18:59.940 00:18:59.940 ' 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.940 --rc genhtml_branch_coverage=1 00:18:59.940 --rc genhtml_function_coverage=1 00:18:59.940 --rc genhtml_legend=1 00:18:59.940 --rc geninfo_all_blocks=1 00:18:59.940 --rc geninfo_unexecuted_blocks=1 00:18:59.940 00:18:59.940 ' 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.940 
12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:59.940 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 
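The stretch of trace that follows tears down any leftover namespace and then rebuilds the virtual test network used by the failover run: two initiator-side veth interfaces (10.0.0.1, 10.0.0.2), two target-side interfaces (10.0.0.3, 10.0.0.4) moved into the nvmf_tgt_ns_spdk namespace, everything joined through the nvmf_br bridge, plus iptables accept rules and ping checks. A condensed sketch of that topology setup, assembled only from the ip and iptables commands visible below (for orientation, not the literal nvmf_veth_init source):

    # Namespace for the NVMe-oF target; the initiator side stays in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry addresses, the *_br ends get enslaved to the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if     # first initiator address
    ip addr add 10.0.0.2/24 dev nvmf_init_if2    # second initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target listen address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2   # second target address
    # Bring everything up and bridge the two sides together.
    ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
    ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    # Allow the NVMe/TCP port through and confirm reachability before starting the target.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3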
00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:59.940 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:59.941 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:59.941 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:59.941 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.941 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:59.941 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:59.941 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:59.941 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:59.941 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:59.941 Cannot find device "nvmf_init_br" 00:18:59.941 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:59.941 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:59.941 Cannot find device "nvmf_init_br2" 00:18:59.941 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:59.941 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:18:59.941 Cannot find device "nvmf_tgt_br" 00:18:59.941 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:18:59.941 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:00.199 Cannot find device "nvmf_tgt_br2" 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:00.199 Cannot find device "nvmf_init_br" 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:00.199 Cannot find device "nvmf_init_br2" 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:00.199 Cannot find device "nvmf_tgt_br" 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:00.199 Cannot find device "nvmf_tgt_br2" 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:00.199 Cannot find device "nvmf_br" 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:00.199 Cannot find device "nvmf_init_if" 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:00.199 Cannot find device "nvmf_init_if2" 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:00.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:00.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:00.199 
12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:00.199 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:00.458 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:00.458 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:19:00.458 00:19:00.458 --- 10.0.0.3 ping statistics --- 00:19:00.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.458 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:00.458 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:00.458 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:19:00.458 00:19:00.458 --- 10.0.0.4 ping statistics --- 00:19:00.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.458 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:00.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:00.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:00.458 00:19:00.458 --- 10.0.0.1 ping statistics --- 00:19:00.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.458 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:00.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:19:00.458 00:19:00.458 --- 10.0.0.2 ping statistics --- 00:19:00.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.458 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # return 0 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=90619 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 90619 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:00.458 12:39:05 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 90619 ']' 00:19:00.458 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.459 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:00.459 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.459 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:00.459 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:00.459 [2024-11-19 12:39:05.617588] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:00.459 [2024-11-19 12:39:05.617919] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.718 [2024-11-19 12:39:05.760702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:00.718 [2024-11-19 12:39:05.794012] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.718 [2024-11-19 12:39:05.794074] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.718 [2024-11-19 12:39:05.794083] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.718 [2024-11-19 12:39:05.794090] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.718 [2024-11-19 12:39:05.794095] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
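At this point the target exposes nqn.2016-06.io.spdk:cnode1 (backed by Malloc0) on 10.0.0.3 ports 4420, 4421 and 4422, and bdevperf has been launched with -q 128 -o 4096 -w verify -t 15 -f against /var/tmp/bdevperf.sock. The trace that follows exercises failover by attaching two paths and then rotating listeners out from under the running I/O. A condensed sketch of that RPC sequence, with commands copied from the surrounding trace (the $rpc/$brpc shorthand is ours, and the backgrounding of perform_tests is inferred from the captured run_test_pid):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    brpc="$rpc -s /var/tmp/bdevperf.sock"          # bdevperf's own RPC socket

    # Attach the same subsystem twice so the initiator has an alternate path ready.
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # Start I/O, then pull the first listener away: traffic should fail over to 4421.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 3

    # Rotate again: bring up a path on 4422, then drop 4421.
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    sleep 3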
00:19:00.718 [2024-11-19 12:39:05.794236] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.718 [2024-11-19 12:39:05.794368] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:00.718 [2024-11-19 12:39:05.794926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.718 [2024-11-19 12:39:05.823106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:00.718 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:00.718 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:00.718 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:00.718 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:00.718 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:00.718 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.718 12:39:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:00.976 [2024-11-19 12:39:06.184216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.976 12:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:01.544 Malloc0 00:19:01.544 12:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:01.802 12:39:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:02.061 12:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:02.320 [2024-11-19 12:39:07.342170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:02.320 12:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:02.579 [2024-11-19 12:39:07.590353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:02.579 12:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:02.838 [2024-11-19 12:39:07.882594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:02.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:02.838 12:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=90669 00:19:02.838 12:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:02.838 12:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:02.838 12:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 90669 /var/tmp/bdevperf.sock 00:19:02.838 12:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 90669 ']' 00:19:02.838 12:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.838 12:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:02.838 12:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:02.838 12:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:02.838 12:39:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:03.097 12:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:03.097 12:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:03.097 12:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:03.355 NVMe0n1 00:19:03.356 12:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:03.615 00:19:03.615 12:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=90685 00:19:03.615 12:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:03.615 12:39:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:04.989 12:39:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:04.989 12:39:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:08.278 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:08.538 00:19:08.538 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:08.798 12:39:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:12.088 12:39:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:12.088 [2024-11-19 12:39:17.140832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:12.088 12:39:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:19:13.026 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:13.285 12:39:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 90685 00:19:19.858 { 00:19:19.858 "results": [ 00:19:19.858 { 00:19:19.858 "job": "NVMe0n1", 00:19:19.858 "core_mask": "0x1", 00:19:19.858 "workload": "verify", 00:19:19.858 "status": "finished", 00:19:19.858 "verify_range": { 00:19:19.858 "start": 0, 00:19:19.858 "length": 16384 00:19:19.858 }, 00:19:19.858 "queue_depth": 128, 00:19:19.858 "io_size": 4096, 00:19:19.858 "runtime": 15.010526, 00:19:19.858 "iops": 9550.631336969804, 00:19:19.858 "mibps": 37.307153660038296, 00:19:19.858 "io_failed": 3245, 00:19:19.858 "io_timeout": 0, 00:19:19.858 "avg_latency_us": 13075.340513426616, 00:19:19.858 "min_latency_us": 539.9272727272727, 00:19:19.858 "max_latency_us": 15847.796363636364 00:19:19.858 } 00:19:19.858 ], 00:19:19.858 "core_count": 1 00:19:19.858 } 00:19:19.858 12:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 90669 00:19:19.858 12:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 90669 ']' 00:19:19.858 12:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 90669 00:19:19.858 12:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:19.858 12:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:19.858 12:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90669 00:19:19.858 killing process with pid 90669 00:19:19.858 12:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:19.858 12:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:19.858 12:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90669' 00:19:19.858 12:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 90669 00:19:19.858 12:39:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 90669 00:19:19.858 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:19.858 [2024-11-19 12:39:07.950849] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
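On the host side, bdevperf is started in RPC-driven mode (-z) on its own socket, both 10.0.0.3:4420 and 10.0.0.3:4421 are attached as paths of controller NVMe0, and perform_tests runs the 15-second verify workload while listeners are removed from and added back to the target to force failovers between the paths. The results block above reports roughly 9550 IOPS (about 37 MiB/s) with 3245 failed I/Os over the 15-second window. A condensed sketch of the first bounce, with every path and flag taken from this run (the waitforlisten step on the bdevperf socket is omitted for brevity):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  BDEVPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1

  $BDEVPERF -z -r "$SOCK" -q 128 -o 4096 -w verify -t 15 -f &
  bdevperf_pid=$!
  # (waiting for $SOCK to come up is omitted here)

  # Two paths under one controller name; bdev_nvme fails over between them.
  $RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n "$NQN"
  $RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n "$NQN"

  $BDEVPERF_PY -s "$SOCK" perform_tests &
  test_pid=$!

  sleep 1
  $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420   # drop the active path
  wait "$test_pid"      # perform_tests returns once the 15 s workload completes
  kill "$bdevperf_pid"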
00:19:19.858 [2024-11-19 12:39:07.950953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90669 ] 00:19:19.858 [2024-11-19 12:39:08.085238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.858 [2024-11-19 12:39:08.119056] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.858 [2024-11-19 12:39:08.146865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:19.858 Running I/O for 15 seconds... 00:19:19.858 7204.00 IOPS, 28.14 MiB/s [2024-11-19T12:39:25.118Z] [2024-11-19 12:39:10.121034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:68768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.858 [2024-11-19 12:39:10.121129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.858 [2024-11-19 12:39:10.121173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.858 [2024-11-19 12:39:10.121188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.858 [2024-11-19 12:39:10.121202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.858 [2024-11-19 12:39:10.121215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.858 [2024-11-19 12:39:10.121228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.858 [2024-11-19 12:39:10.121240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.858 [2024-11-19 12:39:10.121253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:68920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.858 [2024-11-19 12:39:10.121266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.858 [2024-11-19 12:39:10.121279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.858 [2024-11-19 12:39:10.121291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.858 [2024-11-19 12:39:10.121304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.858 [2024-11-19 12:39:10.121316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.858 [2024-11-19 12:39:10.121329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:68944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.858 [2024-11-19 12:39:10.121341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:19.858 [2024-11-19 12:39:10.121355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.858 [2024-11-19 12:39:10.121367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.858 [2024-11-19 12:39:10.121381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.858 [2024-11-19 12:39:10.121393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.858 [2024-11-19 12:39:10.121406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:68968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.858 [2024-11-19 12:39:10.121447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.858 [2024-11-19 12:39:10.121462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.858 [2024-11-19 12:39:10.121474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.858 [2024-11-19 12:39:10.121488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:68984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.858 [2024-11-19 12:39:10.121500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.858 [2024-11-19 12:39:10.121513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.858 [2024-11-19 12:39:10.121525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.858 [2024-11-19 12:39:10.121539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.858 [2024-11-19 12:39:10.121551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.858 [2024-11-19 12:39:10.121564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:69008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.858 [2024-11-19 12:39:10.121576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.858 [2024-11-19 12:39:10.121589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:69016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.858 [2024-11-19 12:39:10.121601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.858 [2024-11-19 12:39:10.121619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.121632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.121645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:69032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.121656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.121670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.121694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.121727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.121756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.121771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:69056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.121784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.121798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:69064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.121811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.121834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:69072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.121848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.121863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.121877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.121892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.121905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.121921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:69096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.121951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.121968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.121981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.121997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:69128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:69152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:69160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:69168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:69184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:69200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:69248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 
12:39:10.122603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:69280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:69288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.859 [2024-11-19 12:39:10.122867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.859 [2024-11-19 12:39:10.122881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.122894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.122909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.122922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.122937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.122950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.122964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.122977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.122999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:69416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:69424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:19.860 [2024-11-19 12:39:10.123571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123888] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.123974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.123987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.124001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.124014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.124028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.124055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.860 [2024-11-19 12:39:10.124069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.860 [2024-11-19 12:39:10.124081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:69656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.861 [2024-11-19 12:39:10.124107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.861 [2024-11-19 12:39:10.124135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.861 [2024-11-19 12:39:10.124162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124181] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.861 [2024-11-19 12:39:10.124194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.861 [2024-11-19 12:39:10.124220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.861 [2024-11-19 12:39:10.124246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.861 [2024-11-19 12:39:10.124272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.861 [2024-11-19 12:39:10.124298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.861 [2024-11-19 12:39:10.124324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.861 [2024-11-19 12:39:10.124350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.861 [2024-11-19 12:39:10.124379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.861 [2024-11-19 12:39:10.124406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.861 [2024-11-19 12:39:10.124432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69760 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.861 [2024-11-19 12:39:10.124458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.861 [2024-11-19 12:39:10.124484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:68776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.861 [2024-11-19 12:39:10.124515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.861 [2024-11-19 12:39:10.124543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.861 [2024-11-19 12:39:10.124571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.861 [2024-11-19 12:39:10.124597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.861 [2024-11-19 12:39:10.124623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.861 [2024-11-19 12:39:10.124649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.861 [2024-11-19 12:39:10.124676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.861 [2024-11-19 12:39:10.124729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:19.861 [2024-11-19 12:39:10.124757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.861 [2024-11-19 12:39:10.124784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.861 [2024-11-19 12:39:10.124811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.861 [2024-11-19 12:39:10.124841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.861 [2024-11-19 12:39:10.124868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.861 [2024-11-19 12:39:10.124903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.861 [2024-11-19 12:39:10.124930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:69776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.861 [2024-11-19 12:39:10.124957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.124972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1542090 is same with the state(6) to be set 00:19:19.861 [2024-11-19 12:39:10.124987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.861 [2024-11-19 12:39:10.124997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.861 [2024-11-19 12:39:10.125008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69784 len:8 PRP1 0x0 PRP2 0x0 00:19:19.861 [2024-11-19 12:39:10.125022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.125065] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1542090 was disconnected and freed. reset controller. 
00:19:19.861 [2024-11-19 12:39:10.125098] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:19.861 [2024-11-19 12:39:10.125145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.861 [2024-11-19 12:39:10.125165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.125180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.861 [2024-11-19 12:39:10.125192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.125205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.861 [2024-11-19 12:39:10.125218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.125231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.861 [2024-11-19 12:39:10.125243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.861 [2024-11-19 12:39:10.125254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:19.861 [2024-11-19 12:39:10.125312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1520cc0 (9): Bad file descriptor 00:19:19.861 [2024-11-19 12:39:10.129110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:19.861 [2024-11-19 12:39:10.164057] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
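This excerpt from try.txt shows what happens when the 4420 listener disappears mid-run: queued I/O on the old qpair is aborted with SQ DELETION status, bdev_nvme starts a failover of the trid from 10.0.0.3:4420 to 10.0.0.3:4421, and the controller reset completes so the verify workload continues on the surviving path. One way to confirm which path a controller ended up on after such a bounce is to query bdevperf's RPC socket; the sketch below assumes the stock bdev_nvme_get_controllers RPC and is not part of this test:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  # Lists NVMe0 and the transport IDs of its current and failover paths.
  $RPC -s "$SOCK" bdev_nvme_get_controllers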
00:19:19.861 7981.50 IOPS, 31.18 MiB/s [2024-11-19T12:39:25.122Z] 8177.00 IOPS, 31.94 MiB/s [2024-11-19T12:39:25.122Z] 8317.75 IOPS, 32.49 MiB/s [2024-11-19T12:39:25.122Z] [2024-11-19 12:39:13.838356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.838426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.838513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.838540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.838567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.838593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.838619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.838645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.838671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.862 [2024-11-19 12:39:13.838712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.862 [2024-11-19 12:39:13.838739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.862 [2024-11-19 12:39:13.838765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.862 [2024-11-19 12:39:13.838792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.862 [2024-11-19 12:39:13.838818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.862 [2024-11-19 12:39:13.838853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.862 [2024-11-19 12:39:13.838880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.862 [2024-11-19 12:39:13.838906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.838932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.838961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.838975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.838987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.839001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.839013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.839026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.839039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.839054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.839067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.839080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.839092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.839107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.839119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.839133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.839145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.839158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.839171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.839234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.839249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.839264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.839277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.839291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.839305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.839319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.839332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.839347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.839360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.839375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.839388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.862 [2024-11-19 12:39:13.839402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.862 [2024-11-19 12:39:13.839416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.839444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.839473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.839500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.839542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.839569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.839610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.839643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 
12:39:13.839657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.839669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.839694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.839721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.839759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.839785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.839811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.839838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.839864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.839890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.839917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.839944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.839977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.839992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.840004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.840030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.840056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.840082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.840109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.840135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.840162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.840188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:82 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.840214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.840240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.863 [2024-11-19 12:39:13.840266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.840292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.840324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.840351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.840378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.840404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.840430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.840456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83456 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.840482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.840508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.863 [2024-11-19 12:39:13.840522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.863 [2024-11-19 12:39:13.840535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.840548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.840561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.840575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.840587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.840601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.840613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.840627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.840645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.840660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.840700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.840715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.840728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.840742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.840755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.840769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:19.864 [2024-11-19 12:39:13.840781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.840796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.840809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.840824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.840837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.840851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.840864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.840879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.840891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.840905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.840918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.840933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.840946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.840960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.864 [2024-11-19 12:39:13.840972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.840987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.864 [2024-11-19 12:39:13.840999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.864 [2024-11-19 12:39:13.841052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.864 [2024-11-19 12:39:13.841081] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.864 [2024-11-19 12:39:13.841123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.864 [2024-11-19 12:39:13.841150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.864 [2024-11-19 12:39:13.841176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.864 [2024-11-19 12:39:13.841202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.864 [2024-11-19 12:39:13.841227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.864 [2024-11-19 12:39:13.841254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.864 [2024-11-19 12:39:13.841280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.864 [2024-11-19 12:39:13.841307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.841333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.841360] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.841386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.841421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.841448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.841474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.841500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.841526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.841553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.841578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.841604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.841632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.864 [2024-11-19 12:39:13.841658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.864 [2024-11-19 12:39:13.841671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1542d80 is same with the state(6) to be set 00:19:19.864 [2024-11-19 12:39:13.841687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.865 [2024-11-19 12:39:13.841705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.865 [2024-11-19 12:39:13.841717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83696 len:8 PRP1 0x0 PRP2 0x0 00:19:19.865 [2024-11-19 12:39:13.841730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.841761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.865 [2024-11-19 12:39:13.841781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.865 [2024-11-19 12:39:13.841792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83704 len:8 PRP1 0x0 PRP2 0x0 00:19:19.865 [2024-11-19 12:39:13.841804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.841817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.865 [2024-11-19 12:39:13.841827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.865 [2024-11-19 12:39:13.841836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83712 len:8 PRP1 0x0 PRP2 0x0 00:19:19.865 [2024-11-19 12:39:13.841848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.841861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.865 [2024-11-19 12:39:13.841870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.865 [2024-11-19 12:39:13.841879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84192 len:8 PRP1 0x0 PRP2 0x0 00:19:19.865 [2024-11-19 12:39:13.841891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.841904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.865 [2024-11-19 12:39:13.841913] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.865 [2024-11-19 12:39:13.841922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84200 len:8 PRP1 0x0 PRP2 0x0 00:19:19.865 [2024-11-19 12:39:13.841935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.841947] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.865 [2024-11-19 12:39:13.841963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.865 [2024-11-19 12:39:13.841973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84208 len:8 PRP1 0x0 PRP2 0x0 00:19:19.865 [2024-11-19 12:39:13.841985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.841998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.865 [2024-11-19 12:39:13.842007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.865 [2024-11-19 12:39:13.842016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84216 len:8 PRP1 0x0 PRP2 0x0 00:19:19.865 [2024-11-19 12:39:13.842028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.842041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.865 [2024-11-19 12:39:13.842050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.865 [2024-11-19 12:39:13.842059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84224 len:8 PRP1 0x0 PRP2 0x0 00:19:19.865 [2024-11-19 12:39:13.842071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.842084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.865 [2024-11-19 12:39:13.842093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.865 [2024-11-19 12:39:13.842103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84232 len:8 PRP1 0x0 PRP2 0x0 00:19:19.865 [2024-11-19 12:39:13.842115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.842134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.865 [2024-11-19 12:39:13.842144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.865 [2024-11-19 12:39:13.842153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84240 len:8 PRP1 0x0 PRP2 0x0 00:19:19.865 [2024-11-19 12:39:13.842166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.842178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.865 [2024-11-19 12:39:13.842187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.865 [2024-11-19 12:39:13.842197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84248 len:8 PRP1 0x0 PRP2 0x0 00:19:19.865 [2024-11-19 12:39:13.842209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.842222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:19:19.865 [2024-11-19 12:39:13.842231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.865 [2024-11-19 12:39:13.842240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84256 len:8 PRP1 0x0 PRP2 0x0 00:19:19.865 [2024-11-19 12:39:13.842252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.842265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.865 [2024-11-19 12:39:13.842274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.865 [2024-11-19 12:39:13.842284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84264 len:8 PRP1 0x0 PRP2 0x0 00:19:19.865 [2024-11-19 12:39:13.842296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.842309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.865 [2024-11-19 12:39:13.842320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.865 [2024-11-19 12:39:13.842329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84272 len:8 PRP1 0x0 PRP2 0x0 00:19:19.865 [2024-11-19 12:39:13.842341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.842353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.865 [2024-11-19 12:39:13.842362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.865 [2024-11-19 12:39:13.842372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84280 len:8 PRP1 0x0 PRP2 0x0 00:19:19.865 [2024-11-19 12:39:13.842384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.842397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.865 [2024-11-19 12:39:13.842406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.865 [2024-11-19 12:39:13.842416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84288 len:8 PRP1 0x0 PRP2 0x0 00:19:19.865 [2024-11-19 12:39:13.842428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.842471] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1542d80 was disconnected and freed. reset controller. 
00:19:19.865 [2024-11-19 12:39:13.842492] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:19:19.865 [2024-11-19 12:39:13.842550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.865 [2024-11-19 12:39:13.842572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.842586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.865 [2024-11-19 12:39:13.842599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.842612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.865 [2024-11-19 12:39:13.842624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.842638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.865 [2024-11-19 12:39:13.842651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:13.842663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:19.865 [2024-11-19 12:39:13.846347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:19.865 [2024-11-19 12:39:13.846384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1520cc0 (9): Bad file descriptor 00:19:19.865 [2024-11-19 12:39:13.878899] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:19.865 8528.60 IOPS, 33.31 MiB/s [2024-11-19T12:39:25.125Z] 8768.83 IOPS, 34.25 MiB/s [2024-11-19T12:39:25.125Z] 9003.57 IOPS, 35.17 MiB/s [2024-11-19T12:39:25.125Z] 9182.12 IOPS, 35.87 MiB/s [2024-11-19T12:39:25.125Z] 9317.67 IOPS, 36.40 MiB/s [2024-11-19T12:39:25.125Z] [2024-11-19 12:39:18.434796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.865 [2024-11-19 12:39:18.434857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:18.434900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.865 [2024-11-19 12:39:18.434914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:18.434928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.865 [2024-11-19 12:39:18.434940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:18.434953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.865 [2024-11-19 12:39:18.434965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:18.434979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.865 [2024-11-19 12:39:18.434991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.865 [2024-11-19 12:39:18.435004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72960 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.866 [2024-11-19 12:39:18.435368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.866 [2024-11-19 12:39:18.435397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.866 [2024-11-19 12:39:18.435424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:19.866 [2024-11-19 12:39:18.435461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.866 [2024-11-19 12:39:18.435489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.866 [2024-11-19 12:39:18.435547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.866 [2024-11-19 12:39:18.435588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.866 [2024-11-19 12:39:18.435614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435785] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.435979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.435993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.436005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.436019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.436031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.436045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.436057] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.436071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.436083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.436098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.436110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.436123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.436136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.436149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.866 [2024-11-19 12:39:18.436162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.436175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.866 [2024-11-19 12:39:18.436207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.866 [2024-11-19 12:39:18.436222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.867 [2024-11-19 12:39:18.436234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.867 [2024-11-19 12:39:18.436260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.867 [2024-11-19 12:39:18.436286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.867 [2024-11-19 12:39:18.436311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.867 [2024-11-19 12:39:18.436337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.867 [2024-11-19 12:39:18.436363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.867 [2024-11-19 12:39:18.436389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.867 [2024-11-19 12:39:18.436414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.867 [2024-11-19 12:39:18.436440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.867 [2024-11-19 12:39:18.436465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.867 [2024-11-19 12:39:18.436490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.867 [2024-11-19 12:39:18.436516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.867 [2024-11-19 12:39:18.436548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.867 [2024-11-19 12:39:18.436573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.867 [2024-11-19 12:39:18.436599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:19.867 [2024-11-19 12:39:18.436613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.867 [2024-11-19 12:39:18.436625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.867 [2024-11-19 12:39:18.436650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.867 [2024-11-19 12:39:18.436675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.867 [2024-11-19 12:39:18.436713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.867 [2024-11-19 12:39:18.436738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.867 [2024-11-19 12:39:18.436765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.867 [2024-11-19 12:39:18.436791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.867 [2024-11-19 12:39:18.436832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.867 [2024-11-19 12:39:18.436859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.867 [2024-11-19 12:39:18.436884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436905] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.867 [2024-11-19 12:39:18.436918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.867 [2024-11-19 12:39:18.436946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.867 [2024-11-19 12:39:18.436972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.436986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.867 [2024-11-19 12:39:18.436999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.867 [2024-11-19 12:39:18.437012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437169] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.868 [2024-11-19 12:39:18.437460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.868 [2024-11-19 12:39:18.437485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.868 [2024-11-19 12:39:18.437510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.868 [2024-11-19 12:39:18.437536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.868 [2024-11-19 12:39:18.437562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.868 [2024-11-19 12:39:18.437593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.868 [2024-11-19 12:39:18.437618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.868 [2024-11-19 12:39:18.437648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.868 [2024-11-19 12:39:18.437689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.868 [2024-11-19 12:39:18.437724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:19.868 [2024-11-19 12:39:18.437752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.868 [2024-11-19 12:39:18.437779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.868 [2024-11-19 12:39:18.437805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:19.868 [2024-11-19 12:39:18.437831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.437982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.437994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.438008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:19.868 [2024-11-19 12:39:18.438021] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.438034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1680220 is same with the state(6) to be set 00:19:19.868 [2024-11-19 12:39:18.438049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.868 [2024-11-19 12:39:18.438059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.868 [2024-11-19 12:39:18.438083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72880 len:8 PRP1 0x0 PRP2 0x0 00:19:19.868 [2024-11-19 12:39:18.438097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.868 [2024-11-19 12:39:18.438110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.869 [2024-11-19 12:39:18.438118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.869 [2024-11-19 12:39:18.438128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73400 len:8 PRP1 0x0 PRP2 0x0 00:19:19.869 [2024-11-19 12:39:18.438139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.869 [2024-11-19 12:39:18.438160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.869 [2024-11-19 12:39:18.438169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73408 len:8 PRP1 0x0 PRP2 0x0 00:19:19.869 [2024-11-19 12:39:18.438181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.869 [2024-11-19 12:39:18.438201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.869 [2024-11-19 12:39:18.438210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73416 len:8 PRP1 0x0 PRP2 0x0 00:19:19.869 [2024-11-19 12:39:18.438221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.869 [2024-11-19 12:39:18.438242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.869 [2024-11-19 12:39:18.438251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73424 len:8 PRP1 0x0 PRP2 0x0 00:19:19.869 [2024-11-19 12:39:18.438262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.869 [2024-11-19 12:39:18.438282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.869 [2024-11-19 12:39:18.438297] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73432 len:8 PRP1 0x0 PRP2 0x0 00:19:19.869 [2024-11-19 12:39:18.438315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.869 [2024-11-19 12:39:18.438336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.869 [2024-11-19 12:39:18.438345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73440 len:8 PRP1 0x0 PRP2 0x0 00:19:19.869 [2024-11-19 12:39:18.438356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.869 [2024-11-19 12:39:18.438376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.869 [2024-11-19 12:39:18.438386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73448 len:8 PRP1 0x0 PRP2 0x0 00:19:19.869 [2024-11-19 12:39:18.438397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.869 [2024-11-19 12:39:18.438417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.869 [2024-11-19 12:39:18.438426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73456 len:8 PRP1 0x0 PRP2 0x0 00:19:19.869 [2024-11-19 12:39:18.438440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.869 [2024-11-19 12:39:18.438461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.869 [2024-11-19 12:39:18.438470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73464 len:8 PRP1 0x0 PRP2 0x0 00:19:19.869 [2024-11-19 12:39:18.438481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.869 [2024-11-19 12:39:18.438519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.869 [2024-11-19 12:39:18.438528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73472 len:8 PRP1 0x0 PRP2 0x0 00:19:19.869 [2024-11-19 12:39:18.438539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.869 [2024-11-19 12:39:18.438560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.869 [2024-11-19 12:39:18.438569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73480 len:8 PRP1 
0x0 PRP2 0x0 00:19:19.869 [2024-11-19 12:39:18.438581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.869 [2024-11-19 12:39:18.438602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.869 [2024-11-19 12:39:18.438611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73488 len:8 PRP1 0x0 PRP2 0x0 00:19:19.869 [2024-11-19 12:39:18.438622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.869 [2024-11-19 12:39:18.438651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.869 [2024-11-19 12:39:18.438661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73496 len:8 PRP1 0x0 PRP2 0x0 00:19:19.869 [2024-11-19 12:39:18.438674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.869 [2024-11-19 12:39:18.438707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.869 [2024-11-19 12:39:18.438718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73504 len:8 PRP1 0x0 PRP2 0x0 00:19:19.869 [2024-11-19 12:39:18.438730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.869 [2024-11-19 12:39:18.438752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.869 [2024-11-19 12:39:18.438761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73512 len:8 PRP1 0x0 PRP2 0x0 00:19:19.869 [2024-11-19 12:39:18.438773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:19.869 [2024-11-19 12:39:18.438794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:19.869 [2024-11-19 12:39:18.438803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73520 len:8 PRP1 0x0 PRP2 0x0 00:19:19.869 [2024-11-19 12:39:18.438817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438859] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1680220 was disconnected and freed. reset controller. 
00:19:19.869 [2024-11-19 12:39:18.438875] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:19:19.869 [2024-11-19 12:39:18.438924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.869 [2024-11-19 12:39:18.438944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.869 [2024-11-19 12:39:18.438970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.438983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.869 [2024-11-19 12:39:18.438995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.439008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:19.869 [2024-11-19 12:39:18.439020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:19.869 [2024-11-19 12:39:18.439032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:19.869 [2024-11-19 12:39:18.439061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1520cc0 (9): Bad file descriptor 00:19:19.869 [2024-11-19 12:39:18.442614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:19.869 [2024-11-19 12:39:18.476736] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
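The long run of "ABORTED - SQ DELETION" notices above is essentially what this failover test is designed to provoke: bdevperf keeps 128 I/Os queued (-q 128) while the active listener is torn down, so every in-flight command on qid:1 is aborted, the qpair is disconnected and freed, and bdev_nvme fails over (here from 10.0.0.3:4422 back to 10.0.0.3:4420) and resets the controller. A run is healthy only if each failover ends in "Resetting controller successful", which is what failover.sh greps for further down. A minimal sketch for sanity-checking a captured log, assuming the try.txt capture path used by failover.sh@94/@115 below (this helper is not part of the test itself):

log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt    # assumed capture file
failovers=$(grep -c 'Start failover from' "$log")           # failover attempts
resets=$(grep -c 'Resetting controller successful' "$log")  # completed recoveries
aborts=$(grep -c 'ABORTED - SQ DELETION' "$log")            # completions aborted during SQ teardown
echo "failovers=$failovers successful_resets=$resets aborted_completions=$aborts"
# every failover attempt should finish with a successful controller reset
(( resets >= failovers )) || echo 'WARNING: some failovers did not recover' >&2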
00:19:19.869 9359.20 IOPS, 36.56 MiB/s [2024-11-19T12:39:25.129Z] 9419.64 IOPS, 36.80 MiB/s [2024-11-19T12:39:25.129Z] 9466.67 IOPS, 36.98 MiB/s [2024-11-19T12:39:25.129Z] 9506.46 IOPS, 37.13 MiB/s [2024-11-19T12:39:25.129Z] 9546.86 IOPS, 37.29 MiB/s [2024-11-19T12:39:25.129Z] 9550.40 IOPS, 37.31 MiB/s 00:19:19.869 Latency(us) 00:19:19.869 [2024-11-19T12:39:25.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.869 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:19.869 Verification LBA range: start 0x0 length 0x4000 00:19:19.869 NVMe0n1 : 15.01 9550.63 37.31 216.18 0.00 13075.34 539.93 15847.80 00:19:19.869 [2024-11-19T12:39:25.129Z] =================================================================================================================== 00:19:19.869 [2024-11-19T12:39:25.129Z] Total : 9550.63 37.31 216.18 0.00 13075.34 539.93 15847.80 00:19:19.869 Received shutdown signal, test time was about 15.000000 seconds 00:19:19.869 00:19:19.869 Latency(us) 00:19:19.869 [2024-11-19T12:39:25.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.869 [2024-11-19T12:39:25.129Z] =================================================================================================================== 00:19:19.869 [2024-11-19T12:39:25.129Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:19.869 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:19.870 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:19:19.870 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:19:19.870 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=90858 00:19:19.870 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:19.870 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 90858 /var/tmp/bdevperf.sock 00:19:19.870 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 90858 ']' 00:19:19.870 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.870 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:19.870 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
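In the summary table above, the columns are runtime(s), IOPS, MiB/s, Fail/s, TO/s and average/min/max latency in microseconds; the MiB/s figure is not measured separately but follows directly from the IOPS and the 4096-byte I/O size passed to bdevperf (-o 4096). A quick arithmetic check of the NVMe0n1 total row, as a sketch:

awk 'BEGIN { iops = 9550.63; io_size = 4096; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
# prints 37.31 MiB/s, matching the MiB/s column; likewise the 216.18 Fail/s over
# the ~15 s run roughly accounts for the I/Os aborted across the failover events above.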
00:19:19.870 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:19.870 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:19.870 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.870 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:19.870 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:19.870 [2024-11-19 12:39:24.678493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:19.870 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:19.870 [2024-11-19 12:39:24.922727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:19.870 12:39:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:20.128 NVMe0n1 00:19:20.128 12:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:20.386 00:19:20.386 12:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:20.953 00:19:20.953 12:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:20.953 12:39:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:19:20.953 12:39:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:21.520 12:39:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:24.818 12:39:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:24.818 12:39:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:24.818 12:39:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=90928 00:19:24.818 12:39:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 90928 00:19:24.818 12:39:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:25.754 { 00:19:25.755 "results": [ 00:19:25.755 { 00:19:25.755 "job": "NVMe0n1", 00:19:25.755 "core_mask": "0x1", 00:19:25.755 "workload": "verify", 00:19:25.755 "status": "finished", 00:19:25.755 "verify_range": { 00:19:25.755 "start": 0, 00:19:25.755 "length": 16384 00:19:25.755 }, 00:19:25.755 "queue_depth": 128, 00:19:25.755 "io_size": 4096, 
00:19:25.755 "runtime": 1.008026, 00:19:25.755 "iops": 7512.703045357957, 00:19:25.755 "mibps": 29.34649627092952, 00:19:25.755 "io_failed": 0, 00:19:25.755 "io_timeout": 0, 00:19:25.755 "avg_latency_us": 16972.009040730827, 00:19:25.755 "min_latency_us": 2174.6036363636363, 00:19:25.755 "max_latency_us": 14179.607272727273 00:19:25.755 } 00:19:25.755 ], 00:19:25.755 "core_count": 1 00:19:25.755 } 00:19:25.755 12:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:25.755 [2024-11-19 12:39:24.197752] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:25.755 [2024-11-19 12:39:24.197868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90858 ] 00:19:25.755 [2024-11-19 12:39:24.335915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.755 [2024-11-19 12:39:24.369431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.755 [2024-11-19 12:39:24.397184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:25.755 [2024-11-19 12:39:26.460972] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:25.755 [2024-11-19 12:39:26.461117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.755 [2024-11-19 12:39:26.461143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.755 [2024-11-19 12:39:26.461161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.755 [2024-11-19 12:39:26.461174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.755 [2024-11-19 12:39:26.461188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.755 [2024-11-19 12:39:26.461201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.755 [2024-11-19 12:39:26.461214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.755 [2024-11-19 12:39:26.461227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.755 [2024-11-19 12:39:26.461240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:25.755 [2024-11-19 12:39:26.461289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.755 [2024-11-19 12:39:26.461318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85dcc0 (9): Bad file descriptor 00:19:25.755 [2024-11-19 12:39:26.471805] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:25.755 Running I/O for 1 seconds... 
00:19:25.755 7445.00 IOPS, 29.08 MiB/s 00:19:25.755 Latency(us) 00:19:25.755 [2024-11-19T12:39:31.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.755 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:25.755 Verification LBA range: start 0x0 length 0x4000 00:19:25.755 NVMe0n1 : 1.01 7512.70 29.35 0.00 0.00 16972.01 2174.60 14179.61 00:19:25.755 [2024-11-19T12:39:31.015Z] =================================================================================================================== 00:19:25.755 [2024-11-19T12:39:31.015Z] Total : 7512.70 29.35 0.00 0.00 16972.01 2174.60 14179.61 00:19:25.755 12:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:25.755 12:39:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:26.014 12:39:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:26.272 12:39:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:26.272 12:39:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:26.840 12:39:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:26.840 12:39:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:30.146 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:30.146 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:30.146 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 90858 00:19:30.146 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 90858 ']' 00:19:30.146 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 90858 00:19:30.146 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:30.146 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.146 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90858 00:19:30.146 killing process with pid 90858 00:19:30.146 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:30.146 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:30.146 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90858' 00:19:30.146 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 90858 00:19:30.146 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 90858 00:19:30.405 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:19:30.405 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:30.663 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:30.663 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:30.663 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:30.663 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:30.663 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:19:30.663 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:30.663 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:19:30.663 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:30.663 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:30.663 rmmod nvme_tcp 00:19:30.663 rmmod nvme_fabrics 00:19:30.663 rmmod nvme_keyring 00:19:30.663 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:30.921 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:19:30.921 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:19:30.921 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 90619 ']' 00:19:30.921 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 90619 00:19:30.921 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 90619 ']' 00:19:30.921 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 90619 00:19:30.921 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:30.921 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.921 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90619 00:19:30.921 killing process with pid 90619 00:19:30.921 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:30.921 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:30.921 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90619' 00:19:30.921 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 90619 00:19:30.921 12:39:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 90619 00:19:30.921 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:30.921 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:30.921 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:30.921 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:19:30.921 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:19:30.921 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:30.921 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:19:30.922 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:30.922 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:30.922 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:30.922 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:30.922 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:30.922 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:19:31.180 00:19:31.180 real 0m31.437s 00:19:31.180 user 2m1.434s 00:19:31.180 sys 0m5.354s 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:31.180 ************************************ 00:19:31.180 END TEST nvmf_failover 00:19:31.180 ************************************ 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.180 ************************************ 00:19:31.180 START TEST nvmf_host_discovery 00:19:31.180 ************************************ 00:19:31.180 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:31.439 * Looking for test storage... 
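The cleanup traced above (nvmftestfini -> nvmf_tcp_fini -> nvmf_veth_fini) follows a fixed order: strip only the SPDK-tagged iptables rules, detach the bridge ports, bring the links down, delete the bridge and veth pairs, then remove the target network namespace. Condensed and abridged from that trace (only one of each interface pair shown; the *_br2/*_if2 counterparts are handled the same way), the sequence is roughly:

  # keep every rule except the ones tagged with the SPDK_NVMF comment
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # detach bridge ports, bring links down, delete bridge and veth pairs
  ip link set nvmf_init_br nomaster; ip link set nvmf_tgt_br nomaster
  ip link set nvmf_init_br down;     ip link set nvmf_tgt_br down
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  # remove_spdk_ns runs with its output suppressed in the trace; it is assumed here
  # to end by deleting the namespace itself, e.g. ip netns delete nvmf_tgt_ns_spdk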
00:19:31.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:31.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.439 --rc genhtml_branch_coverage=1 00:19:31.439 --rc genhtml_function_coverage=1 00:19:31.439 --rc genhtml_legend=1 00:19:31.439 --rc geninfo_all_blocks=1 00:19:31.439 --rc geninfo_unexecuted_blocks=1 00:19:31.439 00:19:31.439 ' 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:31.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.439 --rc genhtml_branch_coverage=1 00:19:31.439 --rc genhtml_function_coverage=1 00:19:31.439 --rc genhtml_legend=1 00:19:31.439 --rc geninfo_all_blocks=1 00:19:31.439 --rc geninfo_unexecuted_blocks=1 00:19:31.439 00:19:31.439 ' 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:31.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.439 --rc genhtml_branch_coverage=1 00:19:31.439 --rc genhtml_function_coverage=1 00:19:31.439 --rc genhtml_legend=1 00:19:31.439 --rc geninfo_all_blocks=1 00:19:31.439 --rc geninfo_unexecuted_blocks=1 00:19:31.439 00:19:31.439 ' 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:31.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.439 --rc genhtml_branch_coverage=1 00:19:31.439 --rc genhtml_function_coverage=1 00:19:31.439 --rc genhtml_legend=1 00:19:31.439 --rc geninfo_all_blocks=1 00:19:31.439 --rc geninfo_unexecuted_blocks=1 00:19:31.439 00:19:31.439 ' 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.439 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:31.440 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
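The variables above define the topology that nvmf_veth_init now builds (the full command trace follows): two initiator-side veth interfaces on the host (10.0.0.1 and 10.0.0.2) and two target-side veth interfaces whose inner ends live in the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all joined by the nvmf_br bridge. Condensed to a single initiator/target pair, the setup is:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # host-side initiator pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair, one end moved into the netns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br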
00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:31.440 Cannot find device "nvmf_init_br" 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:31.440 Cannot find device "nvmf_init_br2" 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:31.440 Cannot find device "nvmf_tgt_br" 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:31.440 Cannot find device "nvmf_tgt_br2" 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:19:31.440 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:31.699 Cannot find device "nvmf_init_br" 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:31.699 Cannot find device "nvmf_init_br2" 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:31.699 Cannot find device "nvmf_tgt_br" 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:31.699 Cannot find device "nvmf_tgt_br2" 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:31.699 Cannot find device "nvmf_br" 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:31.699 Cannot find device "nvmf_init_if" 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:31.699 Cannot find device "nvmf_init_if2" 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:31.699 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:31.699 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:31.699 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:31.700 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:31.959 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:31.959 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:31.959 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:31.959 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:31.959 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:31.959 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:31.959 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:31.959 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:31.959 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:19:31.959 00:19:31.959 --- 10.0.0.3 ping statistics --- 00:19:31.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.959 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:31.959 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:31.959 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:31.959 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:19:31.959 00:19:31.959 --- 10.0.0.4 ping statistics --- 00:19:31.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.959 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:31.959 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:31.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:31.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:31.959 00:19:31.959 --- 10.0.0.1 ping statistics --- 00:19:31.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.959 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:31.959 12:39:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:31.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:31.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:19:31.959 00:19:31.959 --- 10.0.0.2 ping statistics --- 00:19:31.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.959 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:31.959 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.959 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # return 0 00:19:31.959 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:31.959 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.959 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:31.959 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:31.959 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.959 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:31.959 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:31.959 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:31.959 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:31.959 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:31.959 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:31.959 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=91252 00:19:31.959 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 91252 00:19:31.959 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 91252 ']' 00:19:31.960 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.960 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:31.960 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.960 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:31.960 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:31.960 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:31.960 [2024-11-19 12:39:37.120955] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
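With connectivity across the bridge verified by the pings above, nvmfappstart launches the target inside the namespace and waits for its RPC socket before issuing any configuration. A minimal sketch of that pattern (the polling loop here is illustrative; the test's waitforlisten helper does this bookkeeping for it):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll the default RPC socket until the app answers
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

Once the target answers, the trace that follows provisions it: nvmf_create_transport -t tcp -o -u 8192, nvmf_subsystem_add_listener for the discovery NQN on 10.0.0.3:8009, and two null bdevs (bdev_null_create null0/null1, 1000 blocks of 512 bytes) to back the data subsystem.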
00:19:31.960 [2024-11-19 12:39:37.121088] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.219 [2024-11-19 12:39:37.276565] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.219 [2024-11-19 12:39:37.317603] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.219 [2024-11-19 12:39:37.317678] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.219 [2024-11-19 12:39:37.317693] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.219 [2024-11-19 12:39:37.317704] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.219 [2024-11-19 12:39:37.317713] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.219 [2024-11-19 12:39:37.317744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.219 [2024-11-19 12:39:37.351226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.219 [2024-11-19 12:39:37.441612] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.219 [2024-11-19 12:39:37.449785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.219 12:39:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.219 null0 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.219 null1 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.219 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.478 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.478 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=91278 00:19:32.478 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 91278 /tmp/host.sock 00:19:32.478 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:32.478 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 91278 ']' 00:19:32.478 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:19:32.478 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:32.478 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:32.478 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:32.478 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.478 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.478 [2024-11-19 12:39:37.541870] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
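The second SPDK instance started above (hostpid 91278, RPC socket /tmp/host.sock) plays the NVMe-oF host: the test drives its bdev_nvme module over RPC rather than using a kernel initiator. Expressed as direct rpc.py calls roughly equivalent to the rpc_cmd trace below:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /tmp/host.sock log_set_flag bdev_nvme
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # then poll until the expected names show up
  $rpc -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs   # expect "nvme0"
  $rpc -s /tmp/host.sock bdev_get_bdevs            | jq -r '.[].name' | sort | xargs   # expect "nvme0n1"

The waitforcondition and is_notification_count_eq checks in the trace wrap these queries in a retry loop (up to 10 attempts with a 1-second sleep), so the test tolerates the asynchronous attach performed by the discovery service.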
00:19:32.478 [2024-11-19 12:39:37.541971] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91278 ] 00:19:32.478 [2024-11-19 12:39:37.682303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.478 [2024-11-19 12:39:37.723376] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.737 [2024-11-19 12:39:37.755767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:32.737 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.737 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:32.737 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:32.737 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:32.738 12:39:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:32.738 12:39:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.998 12:39:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.998 [2024-11-19 12:39:38.185877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:32.998 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:32.999 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:32.999 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.999 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:19:32.999 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:32.999 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:32.999 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:32.999 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.999 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:32.999 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:32.999 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:19:33.258 12:39:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:19:33.826 [2024-11-19 12:39:38.824528] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:33.826 [2024-11-19 12:39:38.824560] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:33.826 [2024-11-19 12:39:38.824580] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:33.826 [2024-11-19 12:39:38.830559] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:33.826 [2024-11-19 12:39:38.886988] 
bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:33.826 [2024-11-19 12:39:38.887227] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # 
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:19:34.395 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.396 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:34.655 12:39:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.655 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.656 [2024-11-19 12:39:39.775510] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:34.656 [2024-11-19 12:39:39.775917] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:34.656 [2024-11-19 12:39:39.775944] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:34.656 [2024-11-19 12:39:39.781921] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:19:34.656 12:39:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:34.656 [2024-11-19 12:39:39.840317] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:34.656 [2024-11-19 12:39:39.840341] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:34.656 [2024-11-19 12:39:39.840363] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.656 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.916 12:39:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.916 [2024-11-19 12:39:40.000223] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:34.916 [2024-11-19 12:39:40.000266] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:34.916 [2024-11-19 12:39:40.005040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.916 [2024-11-19 12:39:40.005069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.916 [2024-11-19 12:39:40.005082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.916 [2024-11-19 12:39:40.005091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.916 [2024-11-19 12:39:40.005101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.916 [2024-11-19 12:39:40.005110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.916 [2024-11-19 12:39:40.005119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.916 [2024-11-19 12:39:40.005128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.916 [2024-11-19 12:39:40.005137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b0740 is same with the state(6) to be set 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 
max=10 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:34.916 [2024-11-19 12:39:40.006241] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:19:34.916 [2024-11-19 12:39:40.006271] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:34.916 [2024-11-19 12:39:40.006326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b0740 (9): Bad file descriptor 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:34.916 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.175 12:39:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.551 [2024-11-19 12:39:41.435328] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:36.551 [2024-11-19 12:39:41.435357] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:36.551 [2024-11-19 12:39:41.435391] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:36.551 [2024-11-19 12:39:41.441357] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:19:36.551 [2024-11-19 12:39:41.501878] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:36.551 [2024-11-19 12:39:41.501936] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:36.551 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.551 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:36.551 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:36.551 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:36.551 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:36.551 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:36.551 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:36.551 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:36.551 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:19:36.551 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.551 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.551 request: 00:19:36.551 { 00:19:36.551 "name": "nvme", 00:19:36.551 "trtype": "tcp", 00:19:36.551 "traddr": "10.0.0.3", 00:19:36.551 "adrfam": "ipv4", 00:19:36.551 "trsvcid": "8009", 00:19:36.551 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:36.551 "wait_for_attach": true, 00:19:36.551 "method": "bdev_nvme_start_discovery", 00:19:36.551 "req_id": 1 00:19:36.551 } 00:19:36.551 Got JSON-RPC error response 00:19:36.551 response: 00:19:36.551 { 00:19:36.551 "code": -17, 00:19:36.551 "message": "File exists" 00:19:36.551 } 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.552 request: 00:19:36.552 { 00:19:36.552 "name": "nvme_second", 00:19:36.552 "trtype": "tcp", 00:19:36.552 "traddr": "10.0.0.3", 00:19:36.552 "adrfam": "ipv4", 00:19:36.552 "trsvcid": "8009", 00:19:36.552 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:36.552 "wait_for_attach": true, 00:19:36.552 "method": "bdev_nvme_start_discovery", 00:19:36.552 "req_id": 1 00:19:36.552 } 00:19:36.552 Got JSON-RPC error response 00:19:36.552 response: 00:19:36.552 { 00:19:36.552 "code": -17, 00:19:36.552 "message": "File exists" 00:19:36.552 } 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:36.552 12:39:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.552 12:39:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.931 [2024-11-19 12:39:42.770646] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:37.931 [2024-11-19 12:39:42.770712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1929db0 with addr=10.0.0.3, port=8010 00:19:37.931 [2024-11-19 12:39:42.770730] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:37.931 [2024-11-19 12:39:42.770739] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:37.931 [2024-11-19 12:39:42.770747] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:38.868 [2024-11-19 12:39:43.770616] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:38.868 [2024-11-19 12:39:43.770666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191e4e0 with addr=10.0.0.3, port=8010 00:19:38.868 [2024-11-19 12:39:43.770704] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:38.868 [2024-11-19 12:39:43.770714] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:19:38.868 [2024-11-19 12:39:43.770722] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:39.806 [2024-11-19 12:39:44.770550] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:19:39.806 request: 00:19:39.806 { 00:19:39.806 "name": "nvme_second", 00:19:39.806 "trtype": "tcp", 00:19:39.806 "traddr": "10.0.0.3", 00:19:39.806 "adrfam": "ipv4", 00:19:39.806 "trsvcid": "8010", 00:19:39.806 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:39.806 "wait_for_attach": false, 00:19:39.806 "attach_timeout_ms": 3000, 00:19:39.806 "method": "bdev_nvme_start_discovery", 00:19:39.806 "req_id": 1 00:19:39.806 } 00:19:39.806 Got JSON-RPC error response 00:19:39.806 response: 00:19:39.806 { 00:19:39.806 "code": -110, 00:19:39.806 "message": "Connection timed out" 00:19:39.806 } 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 91278 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:39.806 rmmod nvme_tcp 00:19:39.806 rmmod nvme_fabrics 00:19:39.806 rmmod nvme_keyring 00:19:39.806 12:39:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 91252 ']' 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 91252 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 91252 ']' 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 91252 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91252 00:19:39.806 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:39.806 killing process with pid 91252 00:19:39.807 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:39.807 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91252' 00:19:39.807 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 91252 00:19:39.807 12:39:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 91252 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.073 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:19:40.344 00:19:40.344 real 0m8.916s 00:19:40.344 user 0m17.072s 00:19:40.344 sys 0m1.931s 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.344 ************************************ 00:19:40.344 END TEST nvmf_host_discovery 00:19:40.344 ************************************ 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.344 ************************************ 00:19:40.344 START TEST nvmf_host_multipath_status 00:19:40.344 ************************************ 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:40.344 * Looking for test storage... 
00:19:40.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:40.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.344 --rc genhtml_branch_coverage=1 00:19:40.344 --rc genhtml_function_coverage=1 00:19:40.344 --rc genhtml_legend=1 00:19:40.344 --rc geninfo_all_blocks=1 00:19:40.344 --rc geninfo_unexecuted_blocks=1 00:19:40.344 00:19:40.344 ' 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:40.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.344 --rc genhtml_branch_coverage=1 00:19:40.344 --rc genhtml_function_coverage=1 00:19:40.344 --rc genhtml_legend=1 00:19:40.344 --rc geninfo_all_blocks=1 00:19:40.344 --rc geninfo_unexecuted_blocks=1 00:19:40.344 00:19:40.344 ' 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:40.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.344 --rc genhtml_branch_coverage=1 00:19:40.344 --rc genhtml_function_coverage=1 00:19:40.344 --rc genhtml_legend=1 00:19:40.344 --rc geninfo_all_blocks=1 00:19:40.344 --rc geninfo_unexecuted_blocks=1 00:19:40.344 00:19:40.344 ' 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:40.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.344 --rc genhtml_branch_coverage=1 00:19:40.344 --rc genhtml_function_coverage=1 00:19:40.344 --rc genhtml_legend=1 00:19:40.344 --rc geninfo_all_blocks=1 00:19:40.344 --rc geninfo_unexecuted_blocks=1 00:19:40.344 00:19:40.344 ' 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:40.344 12:39:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.344 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:40.345 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:40.345 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:40.605 Cannot find device "nvmf_init_br" 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:40.605 Cannot find device "nvmf_init_br2" 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:40.605 Cannot find device "nvmf_tgt_br" 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:40.605 Cannot find device "nvmf_tgt_br2" 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:40.605 Cannot find device "nvmf_init_br" 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:40.605 Cannot find device "nvmf_init_br2" 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:40.605 Cannot find device "nvmf_tgt_br" 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:40.605 Cannot find device "nvmf_tgt_br2" 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:40.605 Cannot find device "nvmf_br" 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:19:40.605 Cannot find device "nvmf_init_if" 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:40.605 Cannot find device "nvmf_init_if2" 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:40.605 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:40.605 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:40.605 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:40.606 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:40.606 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:40.606 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:40.606 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:40.606 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:40.606 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:40.606 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:40.606 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:40.606 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:40.606 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:40.606 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:40.606 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:40.606 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:40.606 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:40.606 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:40.606 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:40.865 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:40.865 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:40.865 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:40.865 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:40.865 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:40.865 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:40.865 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:40.865 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:40.865 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:40.865 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:40.865 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:40.866 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:40.866 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:19:40.866 00:19:40.866 --- 10.0.0.3 ping statistics --- 00:19:40.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.866 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:40.866 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:40.866 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:19:40.866 00:19:40.866 --- 10.0.0.4 ping statistics --- 00:19:40.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.866 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:40.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:40.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:19:40.866 00:19:40.866 --- 10.0.0.1 ping statistics --- 00:19:40.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.866 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:40.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:19:40.866 00:19:40.866 --- 10.0.0.2 ping statistics --- 00:19:40.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.866 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # return 0 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=91775 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 91775 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 91775 ']' 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
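Note on the block above: the "Cannot find device" / "Cannot open network namespace" messages are the tolerated pre-cleanup that nvmf_veth_init performs before building the topology, so they are expected on a fresh host. What the trace then sets up is a two-path test network: two initiator veth pairs in the default namespace (10.0.0.1 and 10.0.0.2), two target veth pairs inside the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all bridged through nvmf_br, with iptables ACCEPT rules for the NVMe/TCP port, ping checks in both directions, and finally the nvmf_tgt launch inside the namespace. A condensed sketch of that sequence, assuming root privileges and a built SPDK tree at /home/vagrant/spdk_repo/spdk (commands taken from the trace, not a full substitute for nvmf_veth_init):

    # build the veth/bridge topology used by the test
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # allow NVMe/TCP traffic in and across the bridge
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # verify connectivity in both directions, as the trace does
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
    # start the NVMe-oF target inside the namespace (backgrounded)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &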
00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.866 12:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:40.866 [2024-11-19 12:39:46.060102] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:19:40.866 [2024-11-19 12:39:46.060191] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.125 [2024-11-19 12:39:46.196256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:41.126 [2024-11-19 12:39:46.228075] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.126 [2024-11-19 12:39:46.228135] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.126 [2024-11-19 12:39:46.228159] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.126 [2024-11-19 12:39:46.228167] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.126 [2024-11-19 12:39:46.228174] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:41.126 [2024-11-19 12:39:46.229038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.126 [2024-11-19 12:39:46.229089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.126 [2024-11-19 12:39:46.256259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:42.062 12:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:42.062 12:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:19:42.062 12:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:42.062 12:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:42.062 12:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:42.062 12:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.062 12:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=91775 00:19:42.062 12:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:42.321 [2024-11-19 12:39:47.331423] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.321 12:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:42.581 Malloc0 00:19:42.581 12:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:42.840 12:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:42.840 12:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:43.099 [2024-11-19 12:39:48.297586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:43.099 12:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:43.358 [2024-11-19 12:39:48.513658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:43.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:43.358 12:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=91831 00:19:43.358 12:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:43.358 12:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:43.358 12:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 91831 /var/tmp/bdevperf.sock 00:19:43.358 12:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 91831 ']' 00:19:43.358 12:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.358 12:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.358 12:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
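With the target process up, the test configures it over JSON-RPC: a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled, listeners on both 10.0.0.3:4420 and 10.0.0.3:4421, and then the bdevperf application that will act as the host. A minimal sketch of the same sequence, using the rpc.py flags exactly as they appear in the trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport with the options used by the test (-o -u 8192 as traced)
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    # subsystem with any-host access (-a), ANA reporting (-r) and max 2 namespaces (-m 2)
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # two listeners on the same target address: one per path
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    # bdevperf starts idle (-z) and waits to be configured through its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &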
00:19:43.358 12:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.358 12:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:44.296 12:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:44.296 12:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:19:44.296 12:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:44.554 12:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:44.813 Nvme0n1 00:19:44.813 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:45.382 Nvme0n1 00:19:45.382 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:45.382 12:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:47.287 12:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:47.287 12:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:47.546 12:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:47.804 12:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:48.742 12:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:48.742 12:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:48.742 12:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.742 12:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:49.001 12:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.001 12:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:49.001 12:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.001 12:39:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:49.261 12:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:49.261 12:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:49.261 12:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.261 12:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:49.520 12:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.520 12:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:49.520 12:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.520 12:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:49.780 12:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:49.780 12:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:49.780 12:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:49.780 12:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:50.040 12:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:50.040 12:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:50.040 12:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.040 12:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:50.300 12:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:50.300 12:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:50.300 12:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:50.559 12:39:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:50.819 12:39:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:51.755 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:51.755 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:51.755 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.755 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:52.326 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:52.326 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:52.326 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.326 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:52.588 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.588 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:52.588 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.588 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:52.588 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.588 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:52.588 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:52.588 12:39:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.847 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.848 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:52.848 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:52.848 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.106 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.106 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:53.106 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.106 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:53.365 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.365 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:53.365 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:53.624 12:39:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:53.884 12:39:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:54.836 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:54.836 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:54.836 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.836 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:55.405 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.405 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:55.405 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.405 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:55.405 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:55.405 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:55.405 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.405 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:55.665 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.665 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:19:55.665 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.665 12:40:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:55.924 12:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.924 12:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:55.924 12:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.924 12:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:56.493 12:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.493 12:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:56.493 12:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.493 12:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:56.493 12:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.493 12:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:56.493 12:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:56.753 12:40:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:57.012 12:40:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:58.390 12:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:58.390 12:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:58.390 12:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.390 12:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:58.390 12:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.390 12:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:58.390 12:40:03 
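On the host side the same subsystem is attached twice through /var/tmp/bdevperf.sock, once per listener, with -x multipath on the second call so that the 4421 connection is registered as an additional path to the same Nvme0 bdev; those two paths are what the repeated rpc.py/jq pairs above keep polling. A reconstruction of the attach step and of the port_status helper that the trace is exercising, with flags taken verbatim from the log:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    # host-side bdev_nvme options exactly as traced above
    $rpc_py -s $bdevperf_rpc_sock bdev_nvme_set_options -r -1
    # first path (10.0.0.3:4420), then second path registered as multipath (10.0.0.3:4421)
    $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current true
    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }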
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:58.390 12:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.648 12:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:58.648 12:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:58.648 12:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.648 12:40:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:58.907 12:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.907 12:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:58.907 12:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:58.907 12:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.166 12:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.166 12:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:59.166 12:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.166 12:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:59.426 12:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.426 12:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:59.426 12:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.426 12:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:59.686 12:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:59.686 12:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:59.686 12:40:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:59.945 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:00.204 12:40:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:01.142 12:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:01.142 12:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:01.142 12:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.142 12:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:01.401 12:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:01.401 12:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:01.401 12:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:01.401 12:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.660 12:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:01.660 12:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:01.660 12:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.660 12:40:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:01.919 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.919 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:01.919 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.919 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:02.178 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.178 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:02.178 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.178 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:20:02.437 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:02.437 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:02.437 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.437 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:02.696 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:02.696 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:02.696 12:40:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:02.955 12:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:03.214 12:40:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:04.151 12:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:04.151 12:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:04.151 12:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.151 12:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:04.718 12:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:04.718 12:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:04.718 12:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:04.718 12:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.718 12:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.718 12:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:04.718 12:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.718 12:40:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:20:05.287 12:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.287 12:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:05.287 12:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.287 12:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:05.287 12:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.287 12:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:05.287 12:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.287 12:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:05.546 12:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:05.546 12:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:05.546 12:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.546 12:40:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:05.805 12:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.805 12:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:06.064 12:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:06.064 12:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:06.323 12:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:06.582 12:40:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:07.962 12:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:07.962 12:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:07.962 12:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:20:07.962 12:40:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:07.962 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.962 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:07.963 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.963 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:08.222 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:08.222 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:08.222 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:08.222 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:08.481 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:08.481 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:08.481 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:08.481 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:08.741 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:08.741 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:08.741 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:08.741 12:40:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.000 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.000 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:09.000 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:09.000 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.259 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.259 
12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:09.259 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:09.518 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:09.778 12:40:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:10.715 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:10.715 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:10.715 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.715 12:40:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:10.974 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:10.974 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:10.974 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.974 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:11.233 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:11.233 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:11.233 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.233 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:11.492 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:11.492 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:11.492 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.492 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:11.751 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:11.751 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:11.751 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.751 12:40:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:12.011 12:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:12.011 12:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:12.011 12:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:12.011 12:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.269 12:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:12.269 12:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:20:12.269 12:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:12.528 12:40:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:12.788 12:40:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:14.166 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:14.167 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:14.167 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.167 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:14.167 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.167 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:14.167 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:14.167 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.425 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.426 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:20:14.426 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.426 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:14.685 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.685 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:14.685 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:14.685 12:40:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.944 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.944 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:14.944 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:14.944 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.512 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.512 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:15.512 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.512 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:15.512 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.512 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:15.512 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:15.771 12:40:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:16.030 12:40:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:17.457 12:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:17.457 12:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:17.457 12:40:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.457 12:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:17.457 12:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.457 12:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:17.457 12:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.457 12:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:17.726 12:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:17.726 12:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:17.726 12:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.726 12:40:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:18.004 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:18.004 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:18.004 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.004 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:18.271 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:18.271 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:18.271 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:18.271 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.530 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:18.530 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:18.530 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.530 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:20:18.792 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:18.792 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 91831 00:20:18.792 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 91831 ']' 00:20:18.792 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 91831 00:20:18.792 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:20:18.792 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:18.792 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91831 00:20:18.792 killing process with pid 91831 00:20:18.792 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:18.792 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:18.792 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91831' 00:20:18.792 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 91831 00:20:18.792 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 91831 00:20:18.792 { 00:20:18.792 "results": [ 00:20:18.792 { 00:20:18.792 "job": "Nvme0n1", 00:20:18.792 "core_mask": "0x4", 00:20:18.792 "workload": "verify", 00:20:18.792 "status": "terminated", 00:20:18.792 "verify_range": { 00:20:18.792 "start": 0, 00:20:18.792 "length": 16384 00:20:18.792 }, 00:20:18.792 "queue_depth": 128, 00:20:18.792 "io_size": 4096, 00:20:18.792 "runtime": 33.377429, 00:20:18.792 "iops": 9750.69110326023, 00:20:18.792 "mibps": 38.08863712211027, 00:20:18.792 "io_failed": 0, 00:20:18.792 "io_timeout": 0, 00:20:18.792 "avg_latency_us": 13100.125150404345, 00:20:18.792 "min_latency_us": 618.1236363636364, 00:20:18.792 "max_latency_us": 4026531.84 00:20:18.792 } 00:20:18.792 ], 00:20:18.792 "core_count": 1 00:20:18.792 } 00:20:18.792 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 91831 00:20:18.792 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:18.792 [2024-11-19 12:39:48.577448] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:20:18.792 [2024-11-19 12:39:48.577542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91831 ] 00:20:18.792 [2024-11-19 12:39:48.708366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.792 [2024-11-19 12:39:48.740586] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.792 [2024-11-19 12:39:48.768434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:18.792 [2024-11-19 12:39:50.335795] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:20:18.792 Running I/O for 90 seconds... 00:20:18.792 7957.00 IOPS, 31.08 MiB/s [2024-11-19T12:40:24.052Z] 8074.00 IOPS, 31.54 MiB/s [2024-11-19T12:40:24.052Z] 8912.67 IOPS, 34.82 MiB/s [2024-11-19T12:40:24.052Z] 9324.50 IOPS, 36.42 MiB/s [2024-11-19T12:40:24.052Z] 9541.20 IOPS, 37.27 MiB/s [2024-11-19T12:40:24.052Z] 9702.67 IOPS, 37.90 MiB/s [2024-11-19T12:40:24.052Z] 9830.71 IOPS, 38.40 MiB/s [2024-11-19T12:40:24.052Z] 9898.75 IOPS, 38.67 MiB/s [2024-11-19T12:40:24.052Z] 9980.56 IOPS, 38.99 MiB/s [2024-11-19T12:40:24.052Z] 10046.50 IOPS, 39.24 MiB/s [2024-11-19T12:40:24.052Z] 10090.27 IOPS, 39.42 MiB/s [2024-11-19T12:40:24.052Z] 10134.08 IOPS, 39.59 MiB/s [2024-11-19T12:40:24.053Z] 10187.15 IOPS, 39.79 MiB/s [2024-11-19T12:40:24.053Z] 10213.21 IOPS, 39.90 MiB/s [2024-11-19T12:40:24.053Z] [2024-11-19 12:40:05.058367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.058425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.058491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.058510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.058530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.058544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.058562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.058574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.058593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.058606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.058624] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.058636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.058654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.058667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.058713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.058729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.058772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.058788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.058806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.058820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.058839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.058852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.058871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.058884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.058903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.058916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.058934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.058947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.058966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.058979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 
12:40:05.058997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.059010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.059029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.793 [2024-11-19 12:40:05.059041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.059061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.793 [2024-11-19 12:40:05.059075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.059116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.793 [2024-11-19 12:40:05.059129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.059148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.793 [2024-11-19 12:40:05.059161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.059180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.793 [2024-11-19 12:40:05.059202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.059251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.793 [2024-11-19 12:40:05.059266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.059286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.793 [2024-11-19 12:40:05.059300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.059320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.793 [2024-11-19 12:40:05.059335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.059372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.059390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:16 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.059411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.059426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.059445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.059459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.059479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.059493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.059512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.059526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.059545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.059559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.059593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.793 [2024-11-19 12:40:05.059620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:18.793 [2024-11-19 12:40:05.059638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.794 [2024-11-19 12:40:05.059651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.059670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.794 [2024-11-19 12:40:05.059691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.059739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.794 [2024-11-19 12:40:05.059756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.059776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.794 [2024-11-19 12:40:05.059790] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.059809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.794 [2024-11-19 12:40:05.059823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.059842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.794 [2024-11-19 12:40:05.059855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.059874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.794 [2024-11-19 12:40:05.059888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.059906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.794 [2024-11-19 12:40:05.059920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.059938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:18.794 [2024-11-19 12:40:05.059952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.059970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.794 [2024-11-19 12:40:05.059984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.060003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.794 [2024-11-19 12:40:05.060017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.060036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.794 [2024-11-19 12:40:05.060049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.060068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.794 [2024-11-19 12:40:05.060081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.060100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.794 [2024-11-19 
12:40:05.060113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.060140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.794 [2024-11-19 12:40:05.060155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.060188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.794 [2024-11-19 12:40:05.060201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.060219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.794 [2024-11-19 12:40:05.060232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.060250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.794 [2024-11-19 12:40:05.060264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.060282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.794 [2024-11-19 12:40:05.060295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.060313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.794 [2024-11-19 12:40:05.060326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.060345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.794 [2024-11-19 12:40:05.060359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.060377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.794 [2024-11-19 12:40:05.060391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.060409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.794 [2024-11-19 12:40:05.060423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:18.794 [2024-11-19 12:40:05.060441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13560 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.794
00:20:18.794 [log condensed: repeated nvme_qpair.c NOTICE entries, 2024-11-19 12:40:05.060454 through 12:40:05.063837 — READ (SGL TRANSPORT DATA BLOCK) and WRITE (SGL DATA BLOCK OFFSET) commands on qid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
00:20:18.796 9941.93 IOPS, 38.84 MiB/s [2024-11-19T12:40:24.056Z] 9320.56 IOPS, 36.41 MiB/s [2024-11-19T12:40:24.056Z] 8772.29 IOPS, 34.27 MiB/s [2024-11-19T12:40:24.056Z] 8284.94 IOPS, 32.36 MiB/s [2024-11-19T12:40:24.056Z] 8074.53 IOPS, 31.54 MiB/s [2024-11-19T12:40:24.056Z] 8187.15 IOPS, 31.98 MiB/s [2024-11-19T12:40:24.056Z] 8296.33 IOPS, 32.41 MiB/s [2024-11-19T12:40:24.056Z] 8556.95 IOPS, 33.43 MiB/s [2024-11-19T12:40:24.056Z] 8776.61 IOPS, 34.28 MiB/s [2024-11-19T12:40:24.056Z] 8978.29 IOPS, 35.07 MiB/s [2024-11-19T12:40:24.056Z] 9064.88 IOPS, 35.41 MiB/s [2024-11-19T12:40:24.056Z] 9117.46 IOPS, 35.62 MiB/s [2024-11-19T12:40:24.056Z] 9163.19 IOPS, 35.79 MiB/s [2024-11-19T12:40:24.056Z] 9273.14 IOPS, 36.22 MiB/s [2024-11-19T12:40:24.056Z] 9444.97 IOPS, 36.89 MiB/s [2024-11-19T12:40:24.056Z] 9590.53 IOPS, 37.46 MiB/s [2024-11-19T12:40:24.056Z]
00:20:18.797 [log condensed: further repeated nvme_qpair.c NOTICE entries, 2024-11-19 12:40:21.212636 through 12:40:21.215955 — READ/WRITE commands on qid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
00:20:18.798 9698.00 IOPS, 37.88 MiB/s [2024-11-19T12:40:24.058Z] 9727.94 IOPS, 38.00 MiB/s [2024-11-19T12:40:24.058Z] 9747.33 IOPS, 38.08 MiB/s [2024-11-19T12:40:24.058Z] Received shutdown signal, test time was about 33.378175 seconds
00:20:18.798
00:20:18.798 Latency(us)
00:20:18.798 [2024-11-19T12:40:24.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:18.798 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:18.798 Verification LBA range: start 0x0 length 0x4000
00:20:18.798 Nvme0n1 : 33.38 9750.69 38.09 0.00 0.00 13100.13 618.12 4026531.84
00:20:18.798 [2024-11-19T12:40:24.058Z] 
=================================================================================================================== 00:20:18.798 [2024-11-19T12:40:24.058Z] Total : 9750.69 38.09 0.00 0.00 13100.13 618.12 4026531.84 00:20:18.798 [2024-11-19 12:40:23.850145] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:20:18.798 12:40:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:19.057 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:20:19.057 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:19.057 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:20:19.057 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:19.057 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:20:19.316 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:19.316 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:19.317 rmmod nvme_tcp 00:20:19.317 rmmod nvme_fabrics 00:20:19.317 rmmod nvme_keyring 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 91775 ']' 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 91775 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 91775 ']' 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 91775 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91775 00:20:19.317 killing process with pid 91775 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91775' 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 91775 00:20:19.317 
12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 91775 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:19.317 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:20:19.576 00:20:19.576 real 0m39.415s 00:20:19.576 user 2m7.705s 00:20:19.576 sys 0m10.727s 
00:20:19.576 ************************************ 00:20:19.576 END TEST nvmf_host_multipath_status 00:20:19.576 ************************************ 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:19.576 12:40:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:19.837 12:40:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:19.837 12:40:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:19.837 12:40:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:19.837 12:40:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.837 ************************************ 00:20:19.837 START TEST nvmf_discovery_remove_ifc 00:20:19.837 ************************************ 00:20:19.837 12:40:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:19.837 * Looking for test storage... 00:20:19.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:19.837 12:40:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:19.837 12:40:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:20:19.837 12:40:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:19.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.837 --rc genhtml_branch_coverage=1 00:20:19.837 --rc genhtml_function_coverage=1 00:20:19.837 --rc genhtml_legend=1 00:20:19.837 --rc geninfo_all_blocks=1 00:20:19.837 --rc geninfo_unexecuted_blocks=1 00:20:19.837 00:20:19.837 ' 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:19.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.837 --rc genhtml_branch_coverage=1 00:20:19.837 --rc genhtml_function_coverage=1 00:20:19.837 --rc genhtml_legend=1 00:20:19.837 --rc geninfo_all_blocks=1 00:20:19.837 --rc geninfo_unexecuted_blocks=1 00:20:19.837 00:20:19.837 ' 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:19.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.837 --rc genhtml_branch_coverage=1 00:20:19.837 --rc genhtml_function_coverage=1 00:20:19.837 --rc genhtml_legend=1 00:20:19.837 --rc geninfo_all_blocks=1 00:20:19.837 --rc geninfo_unexecuted_blocks=1 00:20:19.837 00:20:19.837 ' 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:19.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.837 --rc genhtml_branch_coverage=1 00:20:19.837 --rc genhtml_function_coverage=1 00:20:19.837 --rc genhtml_legend=1 00:20:19.837 --rc geninfo_all_blocks=1 00:20:19.837 --rc geninfo_unexecuted_blocks=1 00:20:19.837 00:20:19.837 ' 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:19.837 12:40:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.837 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:19.838 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:19.838 12:40:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:19.838 Cannot find device "nvmf_init_br" 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:19.838 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:20.097 Cannot find device "nvmf_init_br2" 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:20.098 Cannot find device "nvmf_tgt_br" 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:20.098 Cannot find device "nvmf_tgt_br2" 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:20.098 Cannot find device "nvmf_init_br" 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:20.098 Cannot find device "nvmf_init_br2" 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:20.098 Cannot find device "nvmf_tgt_br" 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:20.098 Cannot find device "nvmf_tgt_br2" 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:20.098 Cannot find device "nvmf_br" 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:20.098 Cannot find device "nvmf_init_if" 00:20:20.098 12:40:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:20.098 Cannot find device "nvmf_init_if2" 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:20.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:20.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:20.098 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:20.358 12:40:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:20.358 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:20.358 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:20:20.358 00:20:20.358 --- 10.0.0.3 ping statistics --- 00:20:20.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.358 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:20.358 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:20.358 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.029 ms 00:20:20.358 00:20:20.358 --- 10.0.0.4 ping statistics --- 00:20:20.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.358 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:20.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:20.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:20.358 00:20:20.358 --- 10.0.0.1 ping statistics --- 00:20:20.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.358 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:20.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:20:20.358 00:20:20.358 --- 10.0.0.2 ping statistics --- 00:20:20.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.358 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # return 0 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=92668 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 92668 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 92668 ']' 00:20:20.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:20.358 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:20.358 [2024-11-19 12:40:25.576580] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:20:20.358 [2024-11-19 12:40:25.576697] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.618 [2024-11-19 12:40:25.716647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.618 [2024-11-19 12:40:25.751467] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.618 [2024-11-19 12:40:25.751844] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.618 [2024-11-19 12:40:25.751880] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.618 [2024-11-19 12:40:25.751888] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.618 [2024-11-19 12:40:25.751896] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:20.618 [2024-11-19 12:40:25.751924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.618 [2024-11-19 12:40:25.779375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:20.618 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:20.618 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:20:20.618 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:20.618 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:20.618 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:20.618 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.618 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:20.618 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.618 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:20.878 [2024-11-19 12:40:25.887300] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.878 [2024-11-19 12:40:25.895465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:20.878 null0 00:20:20.878 [2024-11-19 12:40:25.927353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:20.878 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
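[Editor's note] The trace above is nvmf_veth_init followed by nvmfappstart: the test builds a disposable topology (a network namespace for the target, veth pairs for the initiator and target sides, everything bridged through nvmf_br, plus iptables ACCEPT rules for port 4420), sanity-checks it with one ping per address, loads nvme-tcp, and only then launches nvmf_tgt inside the namespace. A condensed, hand-runnable sketch of the same setup, using only device names and addresses that appear in the trace (run as root; the second init/target veth pair and the cleanup are omitted for brevity, and this is an illustration rather than the test's actual helper functions):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address seen in the pings
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target/listener address

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  ip link add nvmf_br type bridge                               # tie the two peer ends together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the host side
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # same sanity check the log performs
  modprobe nvme-tcp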
00:20:20.878 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.878 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=92691 00:20:20.878 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:20.878 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 92691 /tmp/host.sock 00:20:20.878 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 92691 ']' 00:20:20.878 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:20:20.878 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:20.878 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:20.878 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:20.878 12:40:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:20.878 [2024-11-19 12:40:26.010504] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:20:20.878 [2024-11-19 12:40:26.010988] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92691 ] 00:20:21.137 [2024-11-19 12:40:26.154801] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.137 [2024-11-19 12:40:26.195792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.138 12:40:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:21.138 12:40:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:20:21.138 12:40:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:21.138 12:40:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:21.138 12:40:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.138 12:40:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:21.138 12:40:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.138 12:40:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:21.138 12:40:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.138 12:40:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:21.138 [2024-11-19 12:40:26.288534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:21.138 12:40:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.138 12:40:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:21.138 12:40:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.138 12:40:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:22.075 [2024-11-19 12:40:27.330575] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:22.340 [2024-11-19 12:40:27.330797] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:22.340 [2024-11-19 12:40:27.330832] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:22.340 [2024-11-19 12:40:27.336630] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:22.340 [2024-11-19 12:40:27.393152] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:22.340 [2024-11-19 12:40:27.393353] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:22.340 [2024-11-19 12:40:27.393418] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:22.340 [2024-11-19 12:40:27.393533] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:22.340 [2024-11-19 12:40:27.393604] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:22.340 [2024-11-19 12:40:27.399049] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1cde290 was disconnected and freed. delete nvme_qpair. 
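[Editor's note] By this point the target namespace exposes a discovery service on 10.0.0.3:8009 and an NVMe/TCP listener on 10.0.0.3:4420, and the host application on /tmp/host.sock has attached the discovered subsystem, which is what produces the nvme0n1 bdev that the wait loop below keeps checking. The rpc_cmd helper in the trace is a thin wrapper around SPDK's scripts/rpc.py; a sketch of the two host-side calls with the arguments taken verbatim from the trace (assumed to be run from the SPDK repository root):

  # start a persistent discovery service against the target's discovery port
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

  # list the resulting bdevs; the test expects exactly "nvme0n1" at this stage
  ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs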
00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:22.340 12:40:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:23.720 12:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:23.720 12:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:23.720 12:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:23.720 12:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.720 12:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:23.720 12:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:23.720 12:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:23.720 12:40:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.720 12:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:23.720 12:40:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:24.655 12:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:24.655 12:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:24.655 12:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:24.655 12:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:24.655 12:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.655 12:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:24.655 12:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:24.655 12:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.655 12:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:24.655 12:40:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:25.592 12:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:25.592 12:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:25.592 12:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:25.592 12:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:25.592 12:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.592 12:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:25.592 12:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:25.592 12:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.592 12:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:25.592 12:40:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:26.529 12:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:26.529 12:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:26.529 12:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:26.529 12:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:26.529 12:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.529 12:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:26.529 12:40:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:26.529 12:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.529 12:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:26.529 12:40:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:27.908 12:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:27.908 12:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:27.908 12:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:27.908 12:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:27.908 12:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.908 12:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:27.908 12:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:27.908 12:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.908 [2024-11-19 12:40:32.821787] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:27.908 [2024-11-19 12:40:32.821857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.908 [2024-11-19 12:40:32.821872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.908 [2024-11-19 12:40:32.821883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.908 [2024-11-19 12:40:32.821892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.908 [2024-11-19 12:40:32.821901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.908 [2024-11-19 12:40:32.821909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.908 [2024-11-19 12:40:32.821918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.908 [2024-11-19 12:40:32.821926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.908 [2024-11-19 12:40:32.821935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.908 [2024-11-19 12:40:32.821943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.908 [2024-11-19 12:40:32.821951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb9d00 is same with the state(6) to be set 00:20:27.908 [2024-11-19 12:40:32.831782] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb9d00 (9): Bad file descriptor 00:20:27.908 [2024-11-19 12:40:32.841801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:27.908 12:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:27.908 12:40:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:28.845 12:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:28.845 12:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:28.845 12:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:28.845 12:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:28.845 12:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.845 12:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:28.845 12:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.845 [2024-11-19 12:40:33.889745] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:28.845 [2024-11-19 12:40:33.889975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cb9d00 with addr=10.0.0.3, port=4420 00:20:28.845 [2024-11-19 12:40:33.890005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb9d00 is same with the state(6) to be set 00:20:28.845 [2024-11-19 12:40:33.890041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb9d00 (9): Bad file descriptor 00:20:28.845 [2024-11-19 12:40:33.890662] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:28.845 [2024-11-19 12:40:33.890763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:28.845 [2024-11-19 12:40:33.890784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:28.845 [2024-11-19 12:40:33.890804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:28.845 [2024-11-19 12:40:33.890848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:28.845 [2024-11-19 12:40:33.890866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:28.845 12:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.845 12:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:28.845 12:40:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:29.782 [2024-11-19 12:40:34.890909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
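[Editor's note] This failure sequence is the point of the test: a few lines earlier the address was deleted from the target interface and the link taken down, so the host's receive path times out (errno 110, Connection timed out), the reconnect to 10.0.0.3:4420 fails the same way, and because discovery was started with --ctrlr-loss-timeout-sec 2 and --reconnect-delay-sec 1 the controller is declared lost rather than retried indefinitely. The removal itself, copied from the trace, plus an illustrative equivalent of the wait-for-empty-bdev-list loop the test runs (the real helper is wait_for_bdev ''):

  # run from the root namespace; the target keeps running, only its data path disappears
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

  # poll until the controller loss propagates and the host's bdev list goes empty
  while [ -n "$(./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name')" ]; do
      sleep 1
  done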
00:20:29.782 [2024-11-19 12:40:34.891092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:29.782 [2024-11-19 12:40:34.891126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:29.782 [2024-11-19 12:40:34.891137] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:20:29.782 [2024-11-19 12:40:34.891158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:29.782 [2024-11-19 12:40:34.891183] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:20:29.782 [2024-11-19 12:40:34.891217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.782 [2024-11-19 12:40:34.891255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.782 [2024-11-19 12:40:34.891285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.782 [2024-11-19 12:40:34.891295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.782 [2024-11-19 12:40:34.891306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.782 [2024-11-19 12:40:34.891315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.782 [2024-11-19 12:40:34.891325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.782 [2024-11-19 12:40:34.891337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.782 [2024-11-19 12:40:34.891347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.782 [2024-11-19 12:40:34.891357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.782 [2024-11-19 12:40:34.891366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:20:29.782 [2024-11-19 12:40:34.891385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca82a0 (9): Bad file descriptor 00:20:29.782 [2024-11-19 12:40:34.892196] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:29.782 [2024-11-19 12:40:34.892242] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:29.782 12:40:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:29.782 12:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.782 12:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:29.782 12:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.041 12:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:30.041 12:40:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:30.978 12:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:30.978 12:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:30.978 12:40:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:30.978 12:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:30.978 12:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.978 12:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:30.978 12:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:30.978 12:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.978 12:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:30.978 12:40:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:31.916 [2024-11-19 12:40:36.904348] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:31.916 [2024-11-19 12:40:36.904369] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:31.916 [2024-11-19 12:40:36.904384] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:31.916 [2024-11-19 12:40:36.910381] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:20:31.916 [2024-11-19 12:40:36.966089] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:31.916 [2024-11-19 12:40:36.966261] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:31.916 [2024-11-19 12:40:36.966320] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:31.916 [2024-11-19 12:40:36.966421] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:20:31.916 [2024-11-19 12:40:36.966543] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:31.916 [2024-11-19 12:40:36.973085] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c95920 was disconnected and freed. delete nvme_qpair. 
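[Editor's note] Restoring the address is enough for the still-running discovery service to re-attach: no new RPC appears in the trace, the bdev_nvme_start_discovery poller from earlier simply finds 10.0.0.3:8009 reachable again and attaches a fresh controller, which is why the new bdev is nvme1n1 rather than nvme0n1 (the old controller was deleted, not reconnected). The restore step mirrors the removal:

  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # then wait until bdev_get_bdevs on /tmp/host.sock reports nvme1n1, as the log does above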
00:20:31.916 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:31.916 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:31.916 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:31.916 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:31.916 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.916 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:31.916 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:31.916 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.916 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:31.916 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:31.916 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 92691 00:20:31.916 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 92691 ']' 00:20:31.916 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 92691 00:20:31.916 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:20:32.176 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:32.176 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92691 00:20:32.176 killing process with pid 92691 00:20:32.176 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:32.176 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:32.176 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92691' 00:20:32.176 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 92691 00:20:32.176 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 92691 00:20:32.176 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:32.176 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:32.176 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:20:32.176 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:32.176 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:20:32.176 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:32.176 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:32.176 rmmod nvme_tcp 00:20:32.176 rmmod nvme_fabrics 00:20:32.176 rmmod nvme_keyring 00:20:32.436 12:40:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 92668 ']' 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 92668 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 92668 ']' 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 92668 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92668 00:20:32.436 killing process with pid 92668 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92668' 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 92668 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 92668 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:32.436 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.696 ************************************ 00:20:32.696 END TEST nvmf_discovery_remove_ifc 00:20:32.696 ************************************ 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:20:32.696 00:20:32.696 real 0m13.024s 00:20:32.696 user 0m22.064s 00:20:32.696 sys 0m2.428s 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.696 ************************************ 00:20:32.696 START TEST nvmf_identify_kernel_target 00:20:32.696 ************************************ 00:20:32.696 12:40:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:32.957 * Looking for test storage... 
00:20:32.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:32.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.957 --rc genhtml_branch_coverage=1 00:20:32.957 --rc genhtml_function_coverage=1 00:20:32.957 --rc genhtml_legend=1 00:20:32.957 --rc geninfo_all_blocks=1 00:20:32.957 --rc geninfo_unexecuted_blocks=1 00:20:32.957 00:20:32.957 ' 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:32.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.957 --rc genhtml_branch_coverage=1 00:20:32.957 --rc genhtml_function_coverage=1 00:20:32.957 --rc genhtml_legend=1 00:20:32.957 --rc geninfo_all_blocks=1 00:20:32.957 --rc geninfo_unexecuted_blocks=1 00:20:32.957 00:20:32.957 ' 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:32.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.957 --rc genhtml_branch_coverage=1 00:20:32.957 --rc genhtml_function_coverage=1 00:20:32.957 --rc genhtml_legend=1 00:20:32.957 --rc geninfo_all_blocks=1 00:20:32.957 --rc geninfo_unexecuted_blocks=1 00:20:32.957 00:20:32.957 ' 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:32.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.957 --rc genhtml_branch_coverage=1 00:20:32.957 --rc genhtml_function_coverage=1 00:20:32.957 --rc genhtml_legend=1 00:20:32.957 --rc geninfo_all_blocks=1 00:20:32.957 --rc geninfo_unexecuted_blocks=1 00:20:32.957 00:20:32.957 ' 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
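Note: the lcov check traced above (scripts/common.sh, lt / cmp_versions) decides whether the installed lcov is older than 2.x by splitting each version string on '.', '-', and ':' and comparing the numeric components in order. A minimal stand-alone sketch of that comparison, paraphrased from the traced commands rather than taken verbatim from scripts/common.sh (components are assumed numeric; the script's decimal() normalization is omitted):

#!/usr/bin/env bash
# Sketch of the component-wise version comparison seen in the trace above.
# cmp_versions 1.15 '<' 2  -> exit 0 when 1.15 sorts before 2.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    # Walk the longer component list, padding the shorter one with 0.
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then [[ $op == '>' ]]; return; fi
        if (( a < b )); then [[ $op == '<' ]]; return; fi
    done
    # All components equal.
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}

cmp_versions 1.15 '<' 2 && echo "lcov 1.15 is older than 2"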
00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.957 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:32.958 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:32.958 12:40:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:32.958 12:40:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:32.958 Cannot find device "nvmf_init_br" 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:32.958 Cannot find device "nvmf_init_br2" 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:32.958 Cannot find device "nvmf_tgt_br" 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:32.958 Cannot find device "nvmf_tgt_br2" 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:32.958 Cannot find device "nvmf_init_br" 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:32.958 Cannot find device "nvmf_init_br2" 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:20:32.958 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:33.218 Cannot find device "nvmf_tgt_br" 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:33.218 Cannot find device "nvmf_tgt_br2" 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:33.218 Cannot find device "nvmf_br" 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:33.218 Cannot find device "nvmf_init_if" 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:33.218 Cannot find device "nvmf_init_if2" 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:33.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:33.218 12:40:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:33.218 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:33.218 12:40:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:33.218 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:33.478 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:33.478 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:20:33.478 00:20:33.478 --- 10.0.0.3 ping statistics --- 00:20:33.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.478 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:33.478 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:33.478 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:20:33.478 00:20:33.478 --- 10.0.0.4 ping statistics --- 00:20:33.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.478 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:33.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:33.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:33.478 00:20:33.478 --- 10.0.0.1 ping statistics --- 00:20:33.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.478 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:33.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:33.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:20:33.478 00:20:33.478 --- 10.0.0.2 ping statistics --- 00:20:33.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.478 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # return 0 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.478 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:33.479 12:40:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:33.738 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:33.738 Waiting for block devices as requested 00:20:33.738 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:33.998 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:33.998 No valid GPT data, bailing 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:33.998 12:40:39 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:33.998 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:34.257 No valid GPT data, bailing 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:34.257 No valid GPT data, bailing 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:34.257 No valid GPT data, bailing 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:34.257 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:34.258 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:20:34.258 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:20:34.258 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:20:34.258 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:20:34.258 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:20:34.258 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:20:34.258 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:20:34.258 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:34.258 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -a 10.0.0.1 -t tcp -s 4420 00:20:34.517 00:20:34.517 Discovery Log Number of Records 2, Generation counter 2 00:20:34.517 =====Discovery Log Entry 0====== 00:20:34.517 trtype: tcp 00:20:34.517 adrfam: ipv4 00:20:34.517 subtype: current discovery subsystem 00:20:34.517 treq: not specified, sq flow control disable supported 00:20:34.517 portid: 1 00:20:34.517 trsvcid: 4420 00:20:34.517 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:34.517 traddr: 10.0.0.1 00:20:34.517 eflags: none 00:20:34.517 sectype: none 00:20:34.517 =====Discovery Log Entry 1====== 00:20:34.517 trtype: tcp 00:20:34.517 adrfam: ipv4 00:20:34.517 subtype: nvme subsystem 00:20:34.517 treq: not 
specified, sq flow control disable supported 00:20:34.517 portid: 1 00:20:34.517 trsvcid: 4420 00:20:34.517 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:34.517 traddr: 10.0.0.1 00:20:34.517 eflags: none 00:20:34.517 sectype: none 00:20:34.517 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:34.517 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:34.517 ===================================================== 00:20:34.517 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:34.517 ===================================================== 00:20:34.517 Controller Capabilities/Features 00:20:34.517 ================================ 00:20:34.517 Vendor ID: 0000 00:20:34.517 Subsystem Vendor ID: 0000 00:20:34.517 Serial Number: ab521f725506a3a1a923 00:20:34.517 Model Number: Linux 00:20:34.517 Firmware Version: 6.8.9-20 00:20:34.517 Recommended Arb Burst: 0 00:20:34.517 IEEE OUI Identifier: 00 00 00 00:20:34.517 Multi-path I/O 00:20:34.517 May have multiple subsystem ports: No 00:20:34.517 May have multiple controllers: No 00:20:34.517 Associated with SR-IOV VF: No 00:20:34.517 Max Data Transfer Size: Unlimited 00:20:34.517 Max Number of Namespaces: 0 00:20:34.517 Max Number of I/O Queues: 1024 00:20:34.517 NVMe Specification Version (VS): 1.3 00:20:34.518 NVMe Specification Version (Identify): 1.3 00:20:34.518 Maximum Queue Entries: 1024 00:20:34.518 Contiguous Queues Required: No 00:20:34.518 Arbitration Mechanisms Supported 00:20:34.518 Weighted Round Robin: Not Supported 00:20:34.518 Vendor Specific: Not Supported 00:20:34.518 Reset Timeout: 7500 ms 00:20:34.518 Doorbell Stride: 4 bytes 00:20:34.518 NVM Subsystem Reset: Not Supported 00:20:34.518 Command Sets Supported 00:20:34.518 NVM Command Set: Supported 00:20:34.518 Boot Partition: Not Supported 00:20:34.518 Memory Page Size Minimum: 4096 bytes 00:20:34.518 Memory Page Size Maximum: 4096 bytes 00:20:34.518 Persistent Memory Region: Not Supported 00:20:34.518 Optional Asynchronous Events Supported 00:20:34.518 Namespace Attribute Notices: Not Supported 00:20:34.518 Firmware Activation Notices: Not Supported 00:20:34.518 ANA Change Notices: Not Supported 00:20:34.518 PLE Aggregate Log Change Notices: Not Supported 00:20:34.518 LBA Status Info Alert Notices: Not Supported 00:20:34.518 EGE Aggregate Log Change Notices: Not Supported 00:20:34.518 Normal NVM Subsystem Shutdown event: Not Supported 00:20:34.518 Zone Descriptor Change Notices: Not Supported 00:20:34.518 Discovery Log Change Notices: Supported 00:20:34.518 Controller Attributes 00:20:34.518 128-bit Host Identifier: Not Supported 00:20:34.518 Non-Operational Permissive Mode: Not Supported 00:20:34.518 NVM Sets: Not Supported 00:20:34.518 Read Recovery Levels: Not Supported 00:20:34.518 Endurance Groups: Not Supported 00:20:34.518 Predictable Latency Mode: Not Supported 00:20:34.518 Traffic Based Keep ALive: Not Supported 00:20:34.518 Namespace Granularity: Not Supported 00:20:34.518 SQ Associations: Not Supported 00:20:34.518 UUID List: Not Supported 00:20:34.518 Multi-Domain Subsystem: Not Supported 00:20:34.518 Fixed Capacity Management: Not Supported 00:20:34.518 Variable Capacity Management: Not Supported 00:20:34.518 Delete Endurance Group: Not Supported 00:20:34.518 Delete NVM Set: Not Supported 00:20:34.518 Extended LBA Formats Supported: Not Supported 00:20:34.518 Flexible Data 
Placement Supported: Not Supported 00:20:34.518 00:20:34.518 Controller Memory Buffer Support 00:20:34.518 ================================ 00:20:34.518 Supported: No 00:20:34.518 00:20:34.518 Persistent Memory Region Support 00:20:34.518 ================================ 00:20:34.518 Supported: No 00:20:34.518 00:20:34.518 Admin Command Set Attributes 00:20:34.518 ============================ 00:20:34.518 Security Send/Receive: Not Supported 00:20:34.518 Format NVM: Not Supported 00:20:34.518 Firmware Activate/Download: Not Supported 00:20:34.518 Namespace Management: Not Supported 00:20:34.518 Device Self-Test: Not Supported 00:20:34.518 Directives: Not Supported 00:20:34.518 NVMe-MI: Not Supported 00:20:34.518 Virtualization Management: Not Supported 00:20:34.518 Doorbell Buffer Config: Not Supported 00:20:34.518 Get LBA Status Capability: Not Supported 00:20:34.518 Command & Feature Lockdown Capability: Not Supported 00:20:34.518 Abort Command Limit: 1 00:20:34.518 Async Event Request Limit: 1 00:20:34.518 Number of Firmware Slots: N/A 00:20:34.518 Firmware Slot 1 Read-Only: N/A 00:20:34.518 Firmware Activation Without Reset: N/A 00:20:34.518 Multiple Update Detection Support: N/A 00:20:34.518 Firmware Update Granularity: No Information Provided 00:20:34.518 Per-Namespace SMART Log: No 00:20:34.518 Asymmetric Namespace Access Log Page: Not Supported 00:20:34.518 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:34.518 Command Effects Log Page: Not Supported 00:20:34.518 Get Log Page Extended Data: Supported 00:20:34.518 Telemetry Log Pages: Not Supported 00:20:34.518 Persistent Event Log Pages: Not Supported 00:20:34.518 Supported Log Pages Log Page: May Support 00:20:34.518 Commands Supported & Effects Log Page: Not Supported 00:20:34.518 Feature Identifiers & Effects Log Page:May Support 00:20:34.518 NVMe-MI Commands & Effects Log Page: May Support 00:20:34.518 Data Area 4 for Telemetry Log: Not Supported 00:20:34.518 Error Log Page Entries Supported: 1 00:20:34.518 Keep Alive: Not Supported 00:20:34.518 00:20:34.518 NVM Command Set Attributes 00:20:34.518 ========================== 00:20:34.518 Submission Queue Entry Size 00:20:34.518 Max: 1 00:20:34.518 Min: 1 00:20:34.518 Completion Queue Entry Size 00:20:34.518 Max: 1 00:20:34.518 Min: 1 00:20:34.518 Number of Namespaces: 0 00:20:34.518 Compare Command: Not Supported 00:20:34.518 Write Uncorrectable Command: Not Supported 00:20:34.518 Dataset Management Command: Not Supported 00:20:34.518 Write Zeroes Command: Not Supported 00:20:34.518 Set Features Save Field: Not Supported 00:20:34.518 Reservations: Not Supported 00:20:34.518 Timestamp: Not Supported 00:20:34.518 Copy: Not Supported 00:20:34.518 Volatile Write Cache: Not Present 00:20:34.518 Atomic Write Unit (Normal): 1 00:20:34.518 Atomic Write Unit (PFail): 1 00:20:34.518 Atomic Compare & Write Unit: 1 00:20:34.518 Fused Compare & Write: Not Supported 00:20:34.518 Scatter-Gather List 00:20:34.518 SGL Command Set: Supported 00:20:34.518 SGL Keyed: Not Supported 00:20:34.518 SGL Bit Bucket Descriptor: Not Supported 00:20:34.518 SGL Metadata Pointer: Not Supported 00:20:34.518 Oversized SGL: Not Supported 00:20:34.518 SGL Metadata Address: Not Supported 00:20:34.518 SGL Offset: Supported 00:20:34.518 Transport SGL Data Block: Not Supported 00:20:34.518 Replay Protected Memory Block: Not Supported 00:20:34.518 00:20:34.518 Firmware Slot Information 00:20:34.518 ========================= 00:20:34.518 Active slot: 0 00:20:34.518 00:20:34.518 00:20:34.518 Error Log 
00:20:34.518 ========= 00:20:34.518 00:20:34.518 Active Namespaces 00:20:34.518 ================= 00:20:34.518 Discovery Log Page 00:20:34.518 ================== 00:20:34.518 Generation Counter: 2 00:20:34.518 Number of Records: 2 00:20:34.518 Record Format: 0 00:20:34.518 00:20:34.518 Discovery Log Entry 0 00:20:34.518 ---------------------- 00:20:34.518 Transport Type: 3 (TCP) 00:20:34.518 Address Family: 1 (IPv4) 00:20:34.518 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:34.518 Entry Flags: 00:20:34.518 Duplicate Returned Information: 0 00:20:34.518 Explicit Persistent Connection Support for Discovery: 0 00:20:34.518 Transport Requirements: 00:20:34.518 Secure Channel: Not Specified 00:20:34.518 Port ID: 1 (0x0001) 00:20:34.518 Controller ID: 65535 (0xffff) 00:20:34.518 Admin Max SQ Size: 32 00:20:34.518 Transport Service Identifier: 4420 00:20:34.518 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:34.518 Transport Address: 10.0.0.1 00:20:34.518 Discovery Log Entry 1 00:20:34.518 ---------------------- 00:20:34.518 Transport Type: 3 (TCP) 00:20:34.518 Address Family: 1 (IPv4) 00:20:34.518 Subsystem Type: 2 (NVM Subsystem) 00:20:34.518 Entry Flags: 00:20:34.518 Duplicate Returned Information: 0 00:20:34.518 Explicit Persistent Connection Support for Discovery: 0 00:20:34.518 Transport Requirements: 00:20:34.518 Secure Channel: Not Specified 00:20:34.518 Port ID: 1 (0x0001) 00:20:34.518 Controller ID: 65535 (0xffff) 00:20:34.518 Admin Max SQ Size: 32 00:20:34.518 Transport Service Identifier: 4420 00:20:34.518 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:34.518 Transport Address: 10.0.0.1 00:20:34.518 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:34.778 get_feature(0x01) failed 00:20:34.778 get_feature(0x02) failed 00:20:34.779 get_feature(0x04) failed 00:20:34.779 ===================================================== 00:20:34.779 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:34.779 ===================================================== 00:20:34.779 Controller Capabilities/Features 00:20:34.779 ================================ 00:20:34.779 Vendor ID: 0000 00:20:34.779 Subsystem Vendor ID: 0000 00:20:34.779 Serial Number: 02851322589e4ef07ac2 00:20:34.779 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:34.779 Firmware Version: 6.8.9-20 00:20:34.779 Recommended Arb Burst: 6 00:20:34.779 IEEE OUI Identifier: 00 00 00 00:20:34.779 Multi-path I/O 00:20:34.779 May have multiple subsystem ports: Yes 00:20:34.779 May have multiple controllers: Yes 00:20:34.779 Associated with SR-IOV VF: No 00:20:34.779 Max Data Transfer Size: Unlimited 00:20:34.779 Max Number of Namespaces: 1024 00:20:34.779 Max Number of I/O Queues: 128 00:20:34.779 NVMe Specification Version (VS): 1.3 00:20:34.779 NVMe Specification Version (Identify): 1.3 00:20:34.779 Maximum Queue Entries: 1024 00:20:34.779 Contiguous Queues Required: No 00:20:34.779 Arbitration Mechanisms Supported 00:20:34.779 Weighted Round Robin: Not Supported 00:20:34.779 Vendor Specific: Not Supported 00:20:34.779 Reset Timeout: 7500 ms 00:20:34.779 Doorbell Stride: 4 bytes 00:20:34.779 NVM Subsystem Reset: Not Supported 00:20:34.779 Command Sets Supported 00:20:34.779 NVM Command Set: Supported 00:20:34.779 Boot Partition: Not Supported 00:20:34.779 Memory 
Page Size Minimum: 4096 bytes 00:20:34.779 Memory Page Size Maximum: 4096 bytes 00:20:34.779 Persistent Memory Region: Not Supported 00:20:34.779 Optional Asynchronous Events Supported 00:20:34.779 Namespace Attribute Notices: Supported 00:20:34.779 Firmware Activation Notices: Not Supported 00:20:34.779 ANA Change Notices: Supported 00:20:34.779 PLE Aggregate Log Change Notices: Not Supported 00:20:34.779 LBA Status Info Alert Notices: Not Supported 00:20:34.779 EGE Aggregate Log Change Notices: Not Supported 00:20:34.779 Normal NVM Subsystem Shutdown event: Not Supported 00:20:34.779 Zone Descriptor Change Notices: Not Supported 00:20:34.779 Discovery Log Change Notices: Not Supported 00:20:34.779 Controller Attributes 00:20:34.779 128-bit Host Identifier: Supported 00:20:34.779 Non-Operational Permissive Mode: Not Supported 00:20:34.779 NVM Sets: Not Supported 00:20:34.779 Read Recovery Levels: Not Supported 00:20:34.779 Endurance Groups: Not Supported 00:20:34.779 Predictable Latency Mode: Not Supported 00:20:34.779 Traffic Based Keep ALive: Supported 00:20:34.779 Namespace Granularity: Not Supported 00:20:34.779 SQ Associations: Not Supported 00:20:34.779 UUID List: Not Supported 00:20:34.779 Multi-Domain Subsystem: Not Supported 00:20:34.779 Fixed Capacity Management: Not Supported 00:20:34.779 Variable Capacity Management: Not Supported 00:20:34.779 Delete Endurance Group: Not Supported 00:20:34.779 Delete NVM Set: Not Supported 00:20:34.779 Extended LBA Formats Supported: Not Supported 00:20:34.779 Flexible Data Placement Supported: Not Supported 00:20:34.779 00:20:34.779 Controller Memory Buffer Support 00:20:34.779 ================================ 00:20:34.779 Supported: No 00:20:34.779 00:20:34.779 Persistent Memory Region Support 00:20:34.779 ================================ 00:20:34.779 Supported: No 00:20:34.779 00:20:34.779 Admin Command Set Attributes 00:20:34.779 ============================ 00:20:34.779 Security Send/Receive: Not Supported 00:20:34.779 Format NVM: Not Supported 00:20:34.779 Firmware Activate/Download: Not Supported 00:20:34.779 Namespace Management: Not Supported 00:20:34.779 Device Self-Test: Not Supported 00:20:34.779 Directives: Not Supported 00:20:34.779 NVMe-MI: Not Supported 00:20:34.779 Virtualization Management: Not Supported 00:20:34.779 Doorbell Buffer Config: Not Supported 00:20:34.779 Get LBA Status Capability: Not Supported 00:20:34.779 Command & Feature Lockdown Capability: Not Supported 00:20:34.779 Abort Command Limit: 4 00:20:34.779 Async Event Request Limit: 4 00:20:34.779 Number of Firmware Slots: N/A 00:20:34.779 Firmware Slot 1 Read-Only: N/A 00:20:34.779 Firmware Activation Without Reset: N/A 00:20:34.779 Multiple Update Detection Support: N/A 00:20:34.779 Firmware Update Granularity: No Information Provided 00:20:34.779 Per-Namespace SMART Log: Yes 00:20:34.779 Asymmetric Namespace Access Log Page: Supported 00:20:34.779 ANA Transition Time : 10 sec 00:20:34.779 00:20:34.779 Asymmetric Namespace Access Capabilities 00:20:34.779 ANA Optimized State : Supported 00:20:34.779 ANA Non-Optimized State : Supported 00:20:34.779 ANA Inaccessible State : Supported 00:20:34.779 ANA Persistent Loss State : Supported 00:20:34.779 ANA Change State : Supported 00:20:34.779 ANAGRPID is not changed : No 00:20:34.779 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:34.779 00:20:34.779 ANA Group Identifier Maximum : 128 00:20:34.779 Number of ANA Group Identifiers : 128 00:20:34.779 Max Number of Allowed Namespaces : 1024 00:20:34.779 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:20:34.779 Command Effects Log Page: Supported 00:20:34.779 Get Log Page Extended Data: Supported 00:20:34.779 Telemetry Log Pages: Not Supported 00:20:34.779 Persistent Event Log Pages: Not Supported 00:20:34.779 Supported Log Pages Log Page: May Support 00:20:34.779 Commands Supported & Effects Log Page: Not Supported 00:20:34.779 Feature Identifiers & Effects Log Page:May Support 00:20:34.779 NVMe-MI Commands & Effects Log Page: May Support 00:20:34.779 Data Area 4 for Telemetry Log: Not Supported 00:20:34.779 Error Log Page Entries Supported: 128 00:20:34.779 Keep Alive: Supported 00:20:34.779 Keep Alive Granularity: 1000 ms 00:20:34.779 00:20:34.779 NVM Command Set Attributes 00:20:34.779 ========================== 00:20:34.779 Submission Queue Entry Size 00:20:34.779 Max: 64 00:20:34.779 Min: 64 00:20:34.779 Completion Queue Entry Size 00:20:34.779 Max: 16 00:20:34.779 Min: 16 00:20:34.779 Number of Namespaces: 1024 00:20:34.779 Compare Command: Not Supported 00:20:34.779 Write Uncorrectable Command: Not Supported 00:20:34.779 Dataset Management Command: Supported 00:20:34.779 Write Zeroes Command: Supported 00:20:34.779 Set Features Save Field: Not Supported 00:20:34.779 Reservations: Not Supported 00:20:34.779 Timestamp: Not Supported 00:20:34.779 Copy: Not Supported 00:20:34.779 Volatile Write Cache: Present 00:20:34.779 Atomic Write Unit (Normal): 1 00:20:34.779 Atomic Write Unit (PFail): 1 00:20:34.779 Atomic Compare & Write Unit: 1 00:20:34.779 Fused Compare & Write: Not Supported 00:20:34.779 Scatter-Gather List 00:20:34.779 SGL Command Set: Supported 00:20:34.779 SGL Keyed: Not Supported 00:20:34.779 SGL Bit Bucket Descriptor: Not Supported 00:20:34.779 SGL Metadata Pointer: Not Supported 00:20:34.779 Oversized SGL: Not Supported 00:20:34.779 SGL Metadata Address: Not Supported 00:20:34.779 SGL Offset: Supported 00:20:34.779 Transport SGL Data Block: Not Supported 00:20:34.779 Replay Protected Memory Block: Not Supported 00:20:34.779 00:20:34.779 Firmware Slot Information 00:20:34.779 ========================= 00:20:34.779 Active slot: 0 00:20:34.779 00:20:34.779 Asymmetric Namespace Access 00:20:34.779 =========================== 00:20:34.779 Change Count : 0 00:20:34.779 Number of ANA Group Descriptors : 1 00:20:34.779 ANA Group Descriptor : 0 00:20:34.779 ANA Group ID : 1 00:20:34.779 Number of NSID Values : 1 00:20:34.779 Change Count : 0 00:20:34.779 ANA State : 1 00:20:34.779 Namespace Identifier : 1 00:20:34.779 00:20:34.779 Commands Supported and Effects 00:20:34.779 ============================== 00:20:34.779 Admin Commands 00:20:34.779 -------------- 00:20:34.780 Get Log Page (02h): Supported 00:20:34.780 Identify (06h): Supported 00:20:34.780 Abort (08h): Supported 00:20:34.780 Set Features (09h): Supported 00:20:34.780 Get Features (0Ah): Supported 00:20:34.780 Asynchronous Event Request (0Ch): Supported 00:20:34.780 Keep Alive (18h): Supported 00:20:34.780 I/O Commands 00:20:34.780 ------------ 00:20:34.780 Flush (00h): Supported 00:20:34.780 Write (01h): Supported LBA-Change 00:20:34.780 Read (02h): Supported 00:20:34.780 Write Zeroes (08h): Supported LBA-Change 00:20:34.780 Dataset Management (09h): Supported 00:20:34.780 00:20:34.780 Error Log 00:20:34.780 ========= 00:20:34.780 Entry: 0 00:20:34.780 Error Count: 0x3 00:20:34.780 Submission Queue Id: 0x0 00:20:34.780 Command Id: 0x5 00:20:34.780 Phase Bit: 0 00:20:34.780 Status Code: 0x2 00:20:34.780 Status Code Type: 0x0 00:20:34.780 Do Not Retry: 1 00:20:34.780 Error 
Location: 0x28 00:20:34.780 LBA: 0x0 00:20:34.780 Namespace: 0x0 00:20:34.780 Vendor Log Page: 0x0 00:20:34.780 ----------- 00:20:34.780 Entry: 1 00:20:34.780 Error Count: 0x2 00:20:34.780 Submission Queue Id: 0x0 00:20:34.780 Command Id: 0x5 00:20:34.780 Phase Bit: 0 00:20:34.780 Status Code: 0x2 00:20:34.780 Status Code Type: 0x0 00:20:34.780 Do Not Retry: 1 00:20:34.780 Error Location: 0x28 00:20:34.780 LBA: 0x0 00:20:34.780 Namespace: 0x0 00:20:34.780 Vendor Log Page: 0x0 00:20:34.780 ----------- 00:20:34.780 Entry: 2 00:20:34.780 Error Count: 0x1 00:20:34.780 Submission Queue Id: 0x0 00:20:34.780 Command Id: 0x4 00:20:34.780 Phase Bit: 0 00:20:34.780 Status Code: 0x2 00:20:34.780 Status Code Type: 0x0 00:20:34.780 Do Not Retry: 1 00:20:34.780 Error Location: 0x28 00:20:34.780 LBA: 0x0 00:20:34.780 Namespace: 0x0 00:20:34.780 Vendor Log Page: 0x0 00:20:34.780 00:20:34.780 Number of Queues 00:20:34.780 ================ 00:20:34.780 Number of I/O Submission Queues: 128 00:20:34.780 Number of I/O Completion Queues: 128 00:20:34.780 00:20:34.780 ZNS Specific Controller Data 00:20:34.780 ============================ 00:20:34.780 Zone Append Size Limit: 0 00:20:34.780 00:20:34.780 00:20:34.780 Active Namespaces 00:20:34.780 ================= 00:20:34.780 get_feature(0x05) failed 00:20:34.780 Namespace ID:1 00:20:34.780 Command Set Identifier: NVM (00h) 00:20:34.780 Deallocate: Supported 00:20:34.780 Deallocated/Unwritten Error: Not Supported 00:20:34.780 Deallocated Read Value: Unknown 00:20:34.780 Deallocate in Write Zeroes: Not Supported 00:20:34.780 Deallocated Guard Field: 0xFFFF 00:20:34.780 Flush: Supported 00:20:34.780 Reservation: Not Supported 00:20:34.780 Namespace Sharing Capabilities: Multiple Controllers 00:20:34.780 Size (in LBAs): 1310720 (5GiB) 00:20:34.780 Capacity (in LBAs): 1310720 (5GiB) 00:20:34.780 Utilization (in LBAs): 1310720 (5GiB) 00:20:34.780 UUID: a881ad88-5f15-4914-abb1-82d89ab67a28 00:20:34.780 Thin Provisioning: Not Supported 00:20:34.780 Per-NS Atomic Units: Yes 00:20:34.780 Atomic Boundary Size (Normal): 0 00:20:34.780 Atomic Boundary Size (PFail): 0 00:20:34.780 Atomic Boundary Offset: 0 00:20:34.780 NGUID/EUI64 Never Reused: No 00:20:34.780 ANA group ID: 1 00:20:34.780 Namespace Write Protected: No 00:20:34.780 Number of LBA Formats: 1 00:20:34.780 Current LBA Format: LBA Format #00 00:20:34.780 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:34.780 00:20:34.780 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:34.780 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:34.780 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:20:34.780 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:34.780 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:20:34.780 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:34.780 12:40:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:34.780 rmmod nvme_tcp 00:20:34.780 rmmod nvme_fabrics 00:20:34.780 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:34.780 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:20:34.780 12:40:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:20:34.780 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:20:34.780 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:34.780 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:34.780 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:34.780 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:20:34.780 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:20:34.780 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:34.780 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:20:34.780 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:34.780 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:34.780 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:34.780 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:20:35.040 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:20:35.299 12:40:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:35.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:35.868 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:35.868 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:36.127 00:20:36.127 real 0m3.257s 00:20:36.127 user 0m1.133s 00:20:36.127 sys 0m1.512s 00:20:36.127 12:40:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:36.127 12:40:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.127 ************************************ 00:20:36.127 END TEST nvmf_identify_kernel_target 00:20:36.127 ************************************ 00:20:36.127 12:40:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:36.127 12:40:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:36.127 12:40:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:36.127 12:40:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.127 ************************************ 00:20:36.127 START TEST nvmf_auth_host 00:20:36.127 ************************************ 00:20:36.127 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:36.127 * Looking for test storage... 
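For orientation before the raw traces resume: the nvmf_auth_host run traced below first rebuilds the dual veth/bridge topology around the nvmf_tgt_ns_spdk namespace, starts nvmf_tgt with nvme_auth logging enabled, and then derives and registers a set of DHCHAP secrets. A condensed sketch of those setup steps, using only interface names, addresses and paths that appear in the traces themselves (the second initiator/target interface pair, iptables rules and error handling are abbreviated), might look like this; treat it as an illustration of the traced flow, not a replacement for nvmf/common.sh:

# condensed, illustrative recap of the nvmf_veth_init + nvmfappstart steps traced below
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                                                # sanity check across the bridge
# started in the background by nvmfappstart in the real run
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &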
00:20:36.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:36.127 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:36.127 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:20:36.127 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:36.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.388 --rc genhtml_branch_coverage=1 00:20:36.388 --rc genhtml_function_coverage=1 00:20:36.388 --rc genhtml_legend=1 00:20:36.388 --rc geninfo_all_blocks=1 00:20:36.388 --rc geninfo_unexecuted_blocks=1 00:20:36.388 00:20:36.388 ' 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:36.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.388 --rc genhtml_branch_coverage=1 00:20:36.388 --rc genhtml_function_coverage=1 00:20:36.388 --rc genhtml_legend=1 00:20:36.388 --rc geninfo_all_blocks=1 00:20:36.388 --rc geninfo_unexecuted_blocks=1 00:20:36.388 00:20:36.388 ' 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:36.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.388 --rc genhtml_branch_coverage=1 00:20:36.388 --rc genhtml_function_coverage=1 00:20:36.388 --rc genhtml_legend=1 00:20:36.388 --rc geninfo_all_blocks=1 00:20:36.388 --rc geninfo_unexecuted_blocks=1 00:20:36.388 00:20:36.388 ' 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:36.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.388 --rc genhtml_branch_coverage=1 00:20:36.388 --rc genhtml_function_coverage=1 00:20:36.388 --rc genhtml_legend=1 00:20:36.388 --rc geninfo_all_blocks=1 00:20:36.388 --rc geninfo_unexecuted_blocks=1 00:20:36.388 00:20:36.388 ' 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.388 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:36.389 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:36.389 Cannot find device "nvmf_init_br" 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:36.389 Cannot find device "nvmf_init_br2" 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:36.389 Cannot find device "nvmf_tgt_br" 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:36.389 Cannot find device "nvmf_tgt_br2" 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:36.389 Cannot find device "nvmf_init_br" 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:36.389 Cannot find device "nvmf_init_br2" 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:36.389 Cannot find device "nvmf_tgt_br" 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:36.389 Cannot find device "nvmf_tgt_br2" 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:36.389 Cannot find device "nvmf_br" 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:36.389 Cannot find device "nvmf_init_if" 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:36.389 Cannot find device "nvmf_init_if2" 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:36.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:36.389 12:40:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:36.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:36.389 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:36.649 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:36.649 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:36.649 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:36.649 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:36.649 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:36.649 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:36.649 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:36.649 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:36.649 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:36.649 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:36.649 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:36.649 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:36.649 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:36.650 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:36.650 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:20:36.650 00:20:36.650 --- 10.0.0.3 ping statistics --- 00:20:36.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.650 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:36.650 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:36.650 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:20:36.650 00:20:36.650 --- 10.0.0.4 ping statistics --- 00:20:36.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.650 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:36.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:36.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:36.650 00:20:36.650 --- 10.0.0.1 ping statistics --- 00:20:36.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.650 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:36.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:36.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:20:36.650 00:20:36.650 --- 10.0.0.2 ping statistics --- 00:20:36.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.650 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # return 0 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=93677 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 93677 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 93677 ']' 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
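The gen_dhchap_key / format_dhchap_key traces just below turn xxd-dumped random bytes into DHHC-1 secret strings before they are written to mode-0600 files under /tmp and later handed to keyring_file_add_key. A rough stand-alone rendition of that formatting step follows; the real helper lives in nvmf/common.sh, and this sketch assumes the usual DHHC-1 layout (secret bytes followed by a little-endian CRC-32, base64-encoded, with a two-hex-digit hash indicator) and that the 32/48/64-character hex string itself serves as the secret bytes, which is what makes the 32-, 48- and 64-byte secret sizes line up with the traced lengths:

# illustrative only -- mirrors the xxd + python steps visible in the traces below
key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars -> a 32-byte ASCII secret ("null 32" case)
digest=0                               # 0 = no hash, 1 = sha256, 2 = sha384, 3 = sha512
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # the hex string itself is used as the secret
crc = zlib.crc32(secret).to_bytes(4, "little")   # DHHC-1 appends a little-endian CRC-32
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
EOF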
00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:36.650 12:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.219 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:37.219 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=d5bd7cdd286973eadd8e40937740871f 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.0Rw 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key d5bd7cdd286973eadd8e40937740871f 0 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 d5bd7cdd286973eadd8e40937740871f 0 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=d5bd7cdd286973eadd8e40937740871f 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.0Rw 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.0Rw 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.0Rw 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:37.220 12:40:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=28e5a6cadd81ba99fdcb1d6071bad7d4ee4b30cd612654ed2bc2f9ffe41530e6 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.pDy 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 28e5a6cadd81ba99fdcb1d6071bad7d4ee4b30cd612654ed2bc2f9ffe41530e6 3 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 28e5a6cadd81ba99fdcb1d6071bad7d4ee4b30cd612654ed2bc2f9ffe41530e6 3 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=28e5a6cadd81ba99fdcb1d6071bad7d4ee4b30cd612654ed2bc2f9ffe41530e6 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.pDy 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.pDy 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.pDy 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=7e9b6d8f91f9891bbce8689abd11b45ec0f9e3cfdc3bcbd2 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.p5h 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 7e9b6d8f91f9891bbce8689abd11b45ec0f9e3cfdc3bcbd2 0 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 7e9b6d8f91f9891bbce8689abd11b45ec0f9e3cfdc3bcbd2 0 
00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=7e9b6d8f91f9891bbce8689abd11b45ec0f9e3cfdc3bcbd2 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.p5h 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.p5h 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.p5h 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=6e0957729a46d5f5aa1dd013ec435df51791b203a7cfad56 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.7Rc 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 6e0957729a46d5f5aa1dd013ec435df51791b203a7cfad56 2 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 6e0957729a46d5f5aa1dd013ec435df51791b203a7cfad56 2 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=6e0957729a46d5f5aa1dd013ec435df51791b203a7cfad56 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:20:37.220 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.7Rc 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.7Rc 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.7Rc 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.480 12:40:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=512b38eb705fdaa7230b0789fdac9db2 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.Wxq 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 512b38eb705fdaa7230b0789fdac9db2 1 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 512b38eb705fdaa7230b0789fdac9db2 1 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=512b38eb705fdaa7230b0789fdac9db2 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.Wxq 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.Wxq 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Wxq 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ff816270274cc6d7feeaf998b1541f14 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.Z00 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ff816270274cc6d7feeaf998b1541f14 1 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ff816270274cc6d7feeaf998b1541f14 1 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=ff816270274cc6d7feeaf998b1541f14 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:20:37.480 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.Z00 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.Z00 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Z00 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=6667ec82db5717730117bd217348aa96edb7738d8053dd88 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.3Ly 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 6667ec82db5717730117bd217348aa96edb7738d8053dd88 2 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 6667ec82db5717730117bd217348aa96edb7738d8053dd88 2 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=6667ec82db5717730117bd217348aa96edb7738d8053dd88 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.3Ly 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.3Ly 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.3Ly 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:37.481 12:40:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:37.481 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ce9b3a8bed8f438839981534078082c8 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.NJA 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ce9b3a8bed8f438839981534078082c8 0 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ce9b3a8bed8f438839981534078082c8 0 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=ce9b3a8bed8f438839981534078082c8 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.NJA 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.NJA 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.NJA 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=07c2a40dbed040db82ae778ed18c758aad796ab651b4f456b3f239c24fdee9b6 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.Ul9 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 07c2a40dbed040db82ae778ed18c758aad796ab651b4f456b3f239c24fdee9b6 3 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 07c2a40dbed040db82ae778ed18c758aad796ab651b4f456b3f239c24fdee9b6 3 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=07c2a40dbed040db82ae778ed18c758aad796ab651b4f456b3f239c24fdee9b6 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.Ul9 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.Ul9 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Ul9 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 93677 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 93677 ']' 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.766 12:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.0Rw 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.pDy ]] 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pDy 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.p5h 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.7Rc ]] 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.7Rc 00:20:38.029 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Wxq 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Z00 ]] 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Z00 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.3Ly 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.NJA ]] 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.NJA 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Ul9 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:38.030 12:40:43 
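Each secret file generated above is then registered with the running SPDK target over its RPC socket; rpc_cmd in this trace is the autotest wrapper around scripts/rpc.py. A minimal equivalent, assuming the default /var/tmp/spdk.sock socket and reusing the key paths from this run, would be:

    # Register the DH-HMAC-CHAP secrets (and controller secrets) as file-backed keys.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # path assumed from this environment
    $rpc -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.0Rw
    $rpc -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pDy
    $rpc -s /var/tmp/spdk.sock keyring_file_add_key key1  /tmp/spdk.key-null.p5h
    $rpc -s /var/tmp/spdk.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Rc
    # ...and likewise for key2/ckey2, key3/ckey3 and key4 (which has no controller key).
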
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:38.030 12:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:38.599 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:38.599 Waiting for block devices as requested 00:20:38.599 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:38.599 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:39.167 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:39.167 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:39.167 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:20:39.167 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:39.167 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:39.167 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:39.167 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:20:39.167 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:39.167 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:39.427 No valid GPT data, bailing 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:39.427 No valid GPT data, bailing 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:39.427 No valid GPT data, bailing 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:39.427 No valid GPT data, bailing 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:20:39.427 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -a 10.0.0.1 -t tcp -s 4420 00:20:39.687 00:20:39.687 Discovery Log Number of Records 2, Generation counter 2 00:20:39.687 =====Discovery Log Entry 0====== 00:20:39.687 trtype: tcp 00:20:39.687 adrfam: ipv4 00:20:39.687 subtype: current discovery subsystem 00:20:39.687 treq: not specified, sq flow control disable supported 00:20:39.687 portid: 1 00:20:39.687 trsvcid: 4420 00:20:39.687 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:39.687 traddr: 10.0.0.1 00:20:39.687 eflags: none 00:20:39.687 sectype: none 00:20:39.687 =====Discovery Log Entry 1====== 00:20:39.687 trtype: tcp 00:20:39.687 adrfam: ipv4 00:20:39.687 subtype: nvme subsystem 00:20:39.687 treq: not specified, sq flow control disable supported 00:20:39.687 portid: 1 00:20:39.687 trsvcid: 4420 00:20:39.687 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:39.687 traddr: 10.0.0.1 00:20:39.687 eflags: none 00:20:39.687 sectype: none 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
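configure_kernel_target above builds the kernel NVMe/TCP target for this test entirely through configfs, exporting /dev/nvme1n1 as nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420 and then restricting it to the single host NQN. The xtrace does not show where each echo is redirected, so the attribute paths below follow the standard nvmet configfs layout and should be read as an assumption rather than a copy of the script:

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_model"          # assumed target of 'echo SPDK-...'
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"
    # host/auth.sh@36-38 then allows only the test host NQN:
    mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > "$sub/attr_allow_any_host"                               # assumed target of 'echo 0'
    ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 "$sub/allowed_hosts/"
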
ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:39.687 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 
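nvmet_auth_set_key (host/auth.sh@42-51) pushes the selected hash, DH group and DHHC-1 secrets into the kernel host entry created above. Again the redirection targets are not visible in the xtrace; the attribute names below are the standard nvmet host dhchap attributes and are an assumption, with the secret values copied from this run:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"        # digest selected for this pass
    echo ffdhe2048      > "$host/dhchap_dhgroup"     # DH group selected for this pass
    echo "DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==:" > "$host/dhchap_key"
    echo "DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==:" > "$host/dhchap_ctrl_key"   # only when a controller key is configured
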
10.0.0.1 ]] 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.688 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.947 nvme0n1 00:20:39.947 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.947 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.947 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.947 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.947 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.947 12:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: ]] 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.947 nvme0n1 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.947 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.208 
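On the host side, connect_authenticate restricts the initiator to the digest/DH-group combination under test and then attaches the controller with the matching key pair from the keyring. The RPCs below are the same ones visible in the trace, condensed into one hedged sequence (rpc_cmd again stands in for scripts/rpc.py against the target's socket):

    # Allow only one digest/dhgroup, then authenticate with key index 0.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Verify the controller actually came up, then tear it down for the next case.
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expect "nvme0"
    rpc_cmd bdev_nvme_detach_controller nvme0
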
12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:40.208 12:40:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.208 nvme0n1 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:40.208 12:40:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.208 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:40.209 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:40.209 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:40.209 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.209 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.209 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:40.209 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.209 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:40.209 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:40.209 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:40.209 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.209 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.209 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.468 nvme0n1 00:20:40.468 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.468 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.468 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.468 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.468 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.468 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.468 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.468 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: ]] 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.469 12:40:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.469 nvme0n1 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.469 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:40.729 
12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
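From this point the log simply repeats the same re-key/re-connect cycle for every remaining combination: host/auth.sh@100-104 iterates each digest, each DH group and each key index, so the rest of the run is one pass of the loop sketched below per combination (function and array names taken from the trace, loop bodies abbreviated):

    for digest in "${digests[@]}"; do            # sha256, sha384, sha512
        for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 through ffdhe8192 in this run
            for keyid in "${!keys[@]}"; do       # 0..4
                nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
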
00:20:40.729 nvme0n1 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:40.729 12:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:40.988 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:40.988 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: ]] 00:20:40.988 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:40.988 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:40.988 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.988 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:40.989 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:40.989 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:40.989 12:40:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.989 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:40.989 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.989 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.248 nvme0n1 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.248 12:40:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.248 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:41.249 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.249 12:40:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:41.249 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:41.249 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:41.249 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.249 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.249 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.508 nvme0n1 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.508 nvme0n1 00:20:41.508 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.768 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.768 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.768 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: ]] 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.769 nvme0n1 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.769 12:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.769 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.029 nvme0n1 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:42.029 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: ]] 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.597 12:40:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.597 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.857 nvme0n1 00:20:42.857 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.857 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.857 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.857 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.857 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.857 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.857 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.857 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.857 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.857 12:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.857 12:40:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.857 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.117 nvme0n1 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.117 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.377 nvme0n1 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: ]] 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.377 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.636 nvme0n1 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:43.636 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:43.637 12:40:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.637 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.896 nvme0n1 00:20:43.896 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.896 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.896 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.896 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.896 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.896 12:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.896 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.896 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.896 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.896 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.896 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.896 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.896 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.896 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:43.896 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.896 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:43.896 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:43.896 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:43.896 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:43.896 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:43.896 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:43.896 12:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: ]] 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.798 nvme0n1 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.798 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.799 12:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.058 nvme0n1 00:20:46.058 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.058 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.058 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.058 12:40:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.058 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.058 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.316 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.316 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.316 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.316 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.316 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.316 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.316 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:46.316 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.316 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.316 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.317 12:40:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.317 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.576 nvme0n1 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: ]] 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:46.576 12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.576 
12:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.835 nvme0n1 00:20:46.835 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.835 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.835 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.835 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.835 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.835 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.095 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.354 nvme0n1 00:20:47.354 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.354 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.354 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.354 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.354 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.354 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.354 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.354 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.354 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.354 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.354 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.354 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.354 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.354 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:47.354 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.355 12:40:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: ]] 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.355 12:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.923 nvme0n1 00:20:47.923 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.923 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.923 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.923 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.924 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.492 nvme0n1 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:48.492 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.493 
12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.493 12:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.060 nvme0n1 00:20:49.060 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.060 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.060 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.060 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.060 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.060 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.060 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.060 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.060 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.060 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: ]] 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.319 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.884 nvme0n1 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.884 12:40:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.884 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:49.885 12:40:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.885 12:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.453 nvme0n1 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: ]] 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:50.453 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:50.454 nvme0n1 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.454 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.714 nvme0n1 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:50.714 
12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:50.714 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.715 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.715 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.715 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.715 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:50.715 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:50.715 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:50.715 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.715 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.715 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:50.715 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.715 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:50.715 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:50.715 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:50.715 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.715 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.715 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.974 nvme0n1 00:20:50.974 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.974 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.974 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.974 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.974 12:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: ]] 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.974 
12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.974 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.975 nvme0n1 00:20:50.975 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.975 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.975 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.975 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.975 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.975 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.975 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.975 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.975 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.975 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:51.234 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.234 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.235 nvme0n1 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: ]] 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.235 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.495 nvme0n1 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.495 
12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:51.495 12:40:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.495 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.755 nvme0n1 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:51.755 12:40:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.755 nvme0n1 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.755 12:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.015 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.015 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: ]] 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.016 12:40:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.016 nvme0n1 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:52.016 
12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.016 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
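The xtrace output here, and the near-identical blocks before and after it, are successive passes of the same per-keyid loop in host/auth.sh: the target-side secret is installed with nvmet_auth_set_key, the SPDK host is restricted to one digest/dhgroup pair with bdev_nvme_set_options, the controller is attached with the key under test, its presence is confirmed with bdev_nvme_get_controllers, and it is detached again before the next keyid. The bash sketch below condenses one such iteration purely as a reading aid; the address, port, NQNs and key name are copied from the trace, while rpc_cmd is assumed to be the RPC helper sourced from autotest_common.sh in the real run, and the DHHC-1 secret is assumed to have already been written to the kernel nvmet target by nvmet_auth_set_key (that step is not reproduced here).

#!/usr/bin/env bash
# Condensed sketch of a single connect_authenticate pass
# (sha384 / ffdhe3072 / keyid=4), following the commands visible in the trace.

digest=sha384
dhgroup=ffdhe3072
keyid=4
ip=10.0.0.1                            # resolved from NVMF_INITIATOR_IP by get_main_ns_ip in the trace
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

# Restrict the SPDK host to the digest/dhgroup pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach using the host key registered as "key${keyid}"; when a controller key
# (ckey${keyid}) exists for this keyid, the trace additionally passes --dhchap-ctrlr-key.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key${keyid}"

# Verify the authenticated controller came up, then tear it down before the next keyid.
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0

Each outer pass of the loop repeats this sequence for the next dhgroup (ffdhe2048, ffdhe3072, ffdhe4096, ...) and each keyid 0-4, which is why the surrounding trace blocks differ only in the dhgroup, keyid and key material.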
00:20:52.276 nvme0n1 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: ]] 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:52.276 12:40:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.276 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.277 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.277 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.277 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.277 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.535 nvme0n1 00:20:52.535 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.535 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.535 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.535 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.535 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.536 12:40:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.536 12:40:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.536 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.795 nvme0n1 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.795 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.796 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.796 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.796 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.796 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.796 12:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.055 nvme0n1 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: ]] 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.055 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.314 nvme0n1 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.314 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.315 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:53.315 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.315 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.573 nvme0n1 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:53.573 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: ]] 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.574 12:40:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.574 12:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.832 nvme0n1 00:20:53.832 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.832 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.832 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.832 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.832 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:54.091 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.092 12:40:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.092 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.350 nvme0n1 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.350 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.608 nvme0n1 00:20:54.608 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.608 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.608 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.608 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.608 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.867 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: ]] 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.868 12:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.127 nvme0n1 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:55.127 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:55.128 12:41:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.128 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.387 nvme0n1 00:20:55.387 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: ]] 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:55.647 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.648 12:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.232 nvme0n1 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.232 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.233 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:56.233 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.233 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:56.233 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:56.233 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:56.233 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.233 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.233 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.801 nvme0n1 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.801 12:41:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.801 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.802 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.802 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.802 12:41:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.802 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:56.802 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:56.802 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:56.802 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.802 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.802 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:56.802 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.802 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:56.802 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:56.802 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:56.802 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.802 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.802 12:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.370 nvme0n1 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: ]] 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:57.370 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.370 
12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.938 nvme0n1 00:20:57.938 12:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:57.938 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.939 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.507 nvme0n1 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:58.507 12:41:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: ]] 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:58.507 12:41:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.507 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.767 nvme0n1 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:58.767 12:41:03 
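
The repeated nvmf/common.sh@765-779 statements above come from the helper that picks the address handed to bdev_nvme_attach_controller before each authentication attempt. Below is a minimal sketch of that selection logic, reconstructed from the traced statements only; the actual get_main_ns_ip in SPDK's test/nvmf/common.sh may differ in wording and arguments.

# Sketch of the address-selection logic visible in the trace above; reconstructed, not copied verbatim.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=()

	# Map each transport to the environment variable that carries its address.
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP

	# TEST_TRANSPORT is tcp in this run, so NVMF_INITIATOR_IP is selected.
	[[ -z $TEST_TRANSPORT ]] && return 1
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

	ip=${ip_candidates[$TEST_TRANSPORT]} # holds the *name* of the variable to dereference
	[[ -z ${!ip} ]] && return 1

	echo "${!ip}" # 10.0.0.1 throughout this log
}
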
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.767 nvme0n1 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.767 12:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.027 nvme0n1 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.027 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: ]] 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.028 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.287 nvme0n1 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.287 nvme0n1 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.287 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: ]] 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:59.547 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:59.548 nvme0n1 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.548 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.807 nvme0n1 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:59.807 
12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.807 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.808 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.808 12:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.067 nvme0n1 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: ]] 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.067 
12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.067 nvme0n1 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.067 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.327 nvme0n1 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:00.327 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: ]] 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
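
One detail worth noting in the keyid loop traced above: the assignment at host/auth.sh@58 only produces the --dhchap-ctrlr-key argument when a controller key exists for that keyid, which is why the keyid=4 attempts attach with --dhchap-key key4 alone (unidirectional authentication). The idiom appears verbatim in the trace; the surrounding attach call below is paraphrased from the host/auth.sh@61 invocations and is only a sketch.

# Expands to the option pair only when ckeys[keyid] is set and non-empty;
# otherwise the array stays empty and the option is omitted from the command line.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

# Paraphrased attach call: with keyid=4 the ckey array is empty, so only
# --dhchap-key key4 is passed, matching the trace above.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key "key${keyid}" "${ckey[@]}"
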
host/auth.sh@51 -- # echo DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.587 nvme0n1 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.587 
12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:21:00.587 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:00.847 12:41:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.847 12:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.847 nvme0n1 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:21:00.847 12:41:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.847 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.106 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.106 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.106 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.106 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.106 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.106 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.106 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.106 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.106 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.107 nvme0n1 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: ]] 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.107 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.107 12:41:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.366 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.366 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.367 nvme0n1 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:01.367 
12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.367 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.626 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.626 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.626 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.626 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.626 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.626 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.626 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:01.626 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.626 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
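The xtrace above repeats one sequence for every dhgroup/keyid combination under the sha512 digest: host/auth.sh first writes the DH-HMAC-CHAP key for the current keyid into the kernel nvmet target (the echo 'hmac(sha512)', echo ffdheNNNN and echo DHHC-1:... lines), then restricts the SPDK host to the same digest and dhgroup with bdev_nvme_set_options, attaches a controller with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key is defined for that keyid), checks that bdev_nvme_get_controllers reports nvme0, and detaches it before the next iteration. The sketch below reconstructs a single loop body under stated assumptions, not a verbatim copy of host/auth.sh: rpc_cmd is the test suite's wrapper around scripts/rpc.py, key1/ckey1 are keyring names registered earlier in the test (outside this excerpt), and the nvmet configfs path and attribute names are inferred from the echo commands rather than quoted from the script.

  digest=sha512
  dhgroup=ffdhe4096
  keyid=1
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs path

  # Target side: install the host key (and optional controller key) for this host NQN.
  echo "hmac(${digest})" > "${host_dir}/dhchap_hash"      # matches: echo 'hmac(sha512)'
  echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"   # matches: echo ffdhe4096
  echo 'DHHC-1:00:...'   > "${host_dir}/dhchap_key"       # host key for this keyid (elided here)
  echo 'DHHC-1:02:...'   > "${host_dir}/dhchap_ctrl_key"  # only when ckey${keyid} is non-empty

  # Host side: allow only the digest/dhgroup under test, then attach with the matching keys.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # Verify the authenticated controller came up, then remove it for the next combination.
  [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

Pinning both sides to a single digest and dhgroup per iteration is what makes a successful attach meaningful: if negotiation could fall back to another group, a misconfigured key would not necessarily surface, whereas here the verify/detach step fails the test as soon as the connect does.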
00:21:01.626 nvme0n1 00:21:01.626 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.626 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.626 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: ]] 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:01.627 12:41:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.627 12:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.195 nvme0n1 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.195 12:41:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.195 12:41:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.195 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.460 nvme0n1 00:21:02.460 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.460 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.460 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.460 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.460 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.460 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.460 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.460 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.460 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.460 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.460 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.460 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.460 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:02.460 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.460 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.461 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.771 nvme0n1 00:21:02.771 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.771 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.771 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.771 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.771 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.771 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.771 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.771 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:02.771 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.771 12:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.771 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.771 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.771 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: ]] 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.031 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.290 nvme0n1 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:03.290 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.291 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.291 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:03.291 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.291 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:03.291 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:03.291 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:03.291 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:03.291 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.291 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.550 nvme0n1 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDViZDdjZGQyODY5NzNlYWRkOGU0MDkzNzc0MDg3MWaQcc40: 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: ]] 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjhlNWE2Y2FkZDgxYmE5OWZkY2IxZDYwNzFiYWQ3ZDRlZTRiMzBjZDYxMjY1NGVkMmJjMmY5ZmZlNDE1MzBlNpozreI=: 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.550 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.809 12:41:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.809 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:03.809 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:03.809 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:03.809 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.809 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.809 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:03.809 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.809 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:03.809 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:03.809 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:03.809 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.809 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.809 12:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.068 nvme0n1 00:21:04.068 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.068 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.068 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.068 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.068 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.325 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.325 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.325 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.325 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.325 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.325 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.325 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.325 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:04.325 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.325 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:04.325 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:04.325 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.326 12:41:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.326 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.892 nvme0n1 00:21:04.892 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.892 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.892 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.893 12:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.460 nvme0n1 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NjY2N2VjODJkYjU3MTc3MzAxMTdiZDIxNzM0OGFhOTZlZGI3NzM4ZDgwNTNkZDg4uvxoig==: 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: ]] 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2U5YjNhOGJlZDhmNDM4ODM5OTgxNTM0MDc4MDgyYzhVFeGP: 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.460 12:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.028 nvme0n1 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdjMmE0MGRiZWQwNDBkYjgyYWU3NzhlZDE4Yzc1OGFhZDc5NmFiNjUxYjRmNDU2YjNmMjM5YzI0ZmRlZTliNhiqLsQ=: 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.028 12:41:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.028 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.597 nvme0n1 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.597 request: 00:21:06.597 { 00:21:06.597 "name": "nvme0", 00:21:06.597 "trtype": "tcp", 00:21:06.597 "traddr": "10.0.0.1", 00:21:06.597 "adrfam": "ipv4", 00:21:06.597 "trsvcid": "4420", 00:21:06.597 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:06.597 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:06.597 "prchk_reftag": false, 00:21:06.597 "prchk_guard": false, 00:21:06.597 "hdgst": false, 00:21:06.597 "ddgst": false, 00:21:06.597 "allow_unrecognized_csi": false, 00:21:06.597 "method": "bdev_nvme_attach_controller", 00:21:06.597 "req_id": 1 00:21:06.597 } 00:21:06.597 Got JSON-RPC error response 00:21:06.597 response: 00:21:06.597 { 00:21:06.597 "code": -5, 00:21:06.597 "message": "Input/output error" 00:21:06.597 } 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:06.597 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.598 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.857 request: 00:21:06.857 { 00:21:06.857 "name": "nvme0", 00:21:06.857 "trtype": "tcp", 00:21:06.857 "traddr": "10.0.0.1", 00:21:06.857 "adrfam": "ipv4", 00:21:06.857 "trsvcid": "4420", 00:21:06.857 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:06.857 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:06.857 "prchk_reftag": false, 00:21:06.857 "prchk_guard": false, 00:21:06.857 "hdgst": false, 00:21:06.857 "ddgst": false, 00:21:06.857 "dhchap_key": "key2", 00:21:06.857 "allow_unrecognized_csi": false, 00:21:06.857 "method": "bdev_nvme_attach_controller", 00:21:06.857 "req_id": 1 00:21:06.857 } 00:21:06.857 Got JSON-RPC error response 00:21:06.857 response: 00:21:06.857 { 00:21:06.857 "code": -5, 00:21:06.857 "message": "Input/output error" 00:21:06.857 } 00:21:06.857 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:06.857 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:06.857 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:06.857 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:06.857 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:06.857 12:41:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.857 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:06.857 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.857 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.857 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.857 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:06.857 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:06.857 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.858 request: 00:21:06.858 { 00:21:06.858 "name": "nvme0", 00:21:06.858 "trtype": "tcp", 00:21:06.858 "traddr": "10.0.0.1", 00:21:06.858 "adrfam": "ipv4", 00:21:06.858 "trsvcid": "4420", 
00:21:06.858 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:06.858 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:06.858 "prchk_reftag": false, 00:21:06.858 "prchk_guard": false, 00:21:06.858 "hdgst": false, 00:21:06.858 "ddgst": false, 00:21:06.858 "dhchap_key": "key1", 00:21:06.858 "dhchap_ctrlr_key": "ckey2", 00:21:06.858 "allow_unrecognized_csi": false, 00:21:06.858 "method": "bdev_nvme_attach_controller", 00:21:06.858 "req_id": 1 00:21:06.858 } 00:21:06.858 Got JSON-RPC error response 00:21:06.858 response: 00:21:06.858 { 00:21:06.858 "code": -5, 00:21:06.858 "message": "Input/output error" 00:21:06.858 } 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.858 12:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.858 nvme0n1 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.858 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.118 request: 00:21:07.118 { 00:21:07.118 "name": "nvme0", 00:21:07.118 "dhchap_key": "key1", 00:21:07.118 "dhchap_ctrlr_key": "ckey2", 00:21:07.118 "method": "bdev_nvme_set_keys", 00:21:07.118 "req_id": 1 00:21:07.118 } 00:21:07.118 Got JSON-RPC error response 00:21:07.118 response: 00:21:07.118 
{ 00:21:07.118 "code": -13, 00:21:07.118 "message": "Permission denied" 00:21:07.118 } 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:07.118 12:41:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2U5YjZkOGY5MWY5ODkxYmJjZTg2ODlhYmQxMWI0NWVjMGY5ZTNjZmRjM2JjYmQyV2h02A==: 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: ]] 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmUwOTU3NzI5YTQ2ZDVmNWFhMWRkMDEzZWM0MzVkZjUxNzkxYjIwM2E3Y2ZhZDU2HHL+Ow==: 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.054 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.313 nvme0n1 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTEyYjM4ZWI3MDVmZGFhNzIzMGIwNzg5ZmRhYzlkYjJszgfv: 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: ]] 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmY4MTYyNzAyNzRjYzZkN2ZlZWFmOTk4YjE1NDFmMTQcM6AW: 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.313 request: 00:21:08.313 { 00:21:08.313 "name": "nvme0", 00:21:08.313 "dhchap_key": "key2", 00:21:08.313 "dhchap_ctrlr_key": "ckey1", 00:21:08.313 "method": "bdev_nvme_set_keys", 00:21:08.313 "req_id": 1 00:21:08.313 } 00:21:08.313 Got JSON-RPC error response 00:21:08.313 response: 00:21:08.313 { 00:21:08.313 "code": -13, 00:21:08.313 "message": "Permission denied" 00:21:08.313 } 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:08.313 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.314 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.314 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.314 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:08.314 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.314 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:08.314 12:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:09.250 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.250 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:09.250 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.250 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.250 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:09.510 rmmod nvme_tcp 00:21:09.510 rmmod nvme_fabrics 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 93677 ']' 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 93677 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 93677 ']' 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 93677 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93677 00:21:09.510 killing process with pid 93677 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93677' 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 93677 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 93677 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:09.510 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:09.770 12:41:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:21:09.770 12:41:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:09.770 12:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:09.770 12:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:09.770 12:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:09.770 12:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:21:09.770 12:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:21:10.029 12:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:10.597 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:10.597 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
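For reference, the kernel-target teardown traced above (the auth.sh cleanup steps followed by clean_kernel_target) reduces to roughly the configfs sequence below. This is a sketch, not the verbatim script: it assumes the subsystem nqn.2024-02.io.spdk:cnode0, port 1, namespace 1 and host nqn.2024-02.io.spdk:host0 used throughout this run, and it assumes the bare "echo 0" in the trace is writing to the namespace enable attribute, per the usual nvmet configfs layout.
# unlink the host from the subsystem, then drop the host entry
rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
# disable the namespace (assumed target of the "echo 0" step in the trace)
echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
# detach the subsystem from the listener port, then remove namespace, port and subsystem
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
# finally unload the kernel target modules, as the modprobe -r step above does
modprobe -r nvmet_tcp nvmet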
00:21:10.857 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:10.857 12:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.0Rw /tmp/spdk.key-null.p5h /tmp/spdk.key-sha256.Wxq /tmp/spdk.key-sha384.3Ly /tmp/spdk.key-sha512.Ul9 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:21:10.857 12:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:11.116 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:11.116 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:11.116 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:11.116 00:21:11.116 real 0m35.118s 00:21:11.116 user 0m32.703s 00:21:11.116 sys 0m3.812s 00:21:11.116 ************************************ 00:21:11.116 12:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:11.116 12:41:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.116 END TEST nvmf_auth_host 00:21:11.117 ************************************ 00:21:11.376 12:41:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.377 ************************************ 00:21:11.377 START TEST nvmf_digest 00:21:11.377 ************************************ 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:11.377 * Looking for test storage... 
00:21:11.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:11.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.377 --rc genhtml_branch_coverage=1 00:21:11.377 --rc genhtml_function_coverage=1 00:21:11.377 --rc genhtml_legend=1 00:21:11.377 --rc geninfo_all_blocks=1 00:21:11.377 --rc geninfo_unexecuted_blocks=1 00:21:11.377 00:21:11.377 ' 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:11.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.377 --rc genhtml_branch_coverage=1 00:21:11.377 --rc genhtml_function_coverage=1 00:21:11.377 --rc genhtml_legend=1 00:21:11.377 --rc geninfo_all_blocks=1 00:21:11.377 --rc geninfo_unexecuted_blocks=1 00:21:11.377 00:21:11.377 ' 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:11.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.377 --rc genhtml_branch_coverage=1 00:21:11.377 --rc genhtml_function_coverage=1 00:21:11.377 --rc genhtml_legend=1 00:21:11.377 --rc geninfo_all_blocks=1 00:21:11.377 --rc geninfo_unexecuted_blocks=1 00:21:11.377 00:21:11.377 ' 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:11.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.377 --rc genhtml_branch_coverage=1 00:21:11.377 --rc genhtml_function_coverage=1 00:21:11.377 --rc genhtml_legend=1 00:21:11.377 --rc geninfo_all_blocks=1 00:21:11.377 --rc geninfo_unexecuted_blocks=1 00:21:11.377 00:21:11.377 ' 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.377 12:41:16 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.377 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:11.377 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:11.378 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:11.638 Cannot find device "nvmf_init_br" 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:11.638 Cannot find device "nvmf_init_br2" 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:11.638 Cannot find device "nvmf_tgt_br" 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:21:11.638 Cannot find device "nvmf_tgt_br2" 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:11.638 Cannot find device "nvmf_init_br" 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:11.638 Cannot find device "nvmf_init_br2" 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:11.638 Cannot find device "nvmf_tgt_br" 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:11.638 Cannot find device "nvmf_tgt_br2" 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:11.638 Cannot find device "nvmf_br" 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:11.638 Cannot find device "nvmf_init_if" 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:11.638 Cannot find device "nvmf_init_if2" 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:11.638 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:11.638 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:11.638 12:41:16 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:11.638 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:11.898 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:11.898 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:11.898 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:11.898 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:11.898 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:11.898 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:11.898 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:11.898 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:11.898 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:11.898 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:11.898 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:11.898 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:11.898 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:11.898 12:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:11.898 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:11.898 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:11.898 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:11.898 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:11.898 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:11.898 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:11.898 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:11.898 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:11.898 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:11.898 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:21:11.898 00:21:11.898 --- 10.0.0.3 ping statistics --- 00:21:11.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.898 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:11.898 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:11.898 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:11.898 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:21:11.898 00:21:11.898 --- 10.0.0.4 ping statistics --- 00:21:11.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.898 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:21:11.898 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:11.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:21:11.899 00:21:11.899 --- 10.0.0.1 ping statistics --- 00:21:11.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.899 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:11.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:11.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:21:11.899 00:21:11.899 --- 10.0.0.2 ping statistics --- 00:21:11.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.899 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # return 0 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:11.899 ************************************ 00:21:11.899 START TEST nvmf_digest_clean 00:21:11.899 ************************************ 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=95321 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 95321 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 95321 ']' 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:11.899 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:12.159 [2024-11-19 12:41:17.159440] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:12.159 [2024-11-19 12:41:17.159536] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.159 [2024-11-19 12:41:17.302922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.159 [2024-11-19 12:41:17.346856] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.159 [2024-11-19 12:41:17.346920] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.159 [2024-11-19 12:41:17.346936] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.159 [2024-11-19 12:41:17.346946] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.159 [2024-11-19 12:41:17.346954] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:12.159 [2024-11-19 12:41:17.346987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.159 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:12.159 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:12.159 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:12.159 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:12.159 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:12.418 [2024-11-19 12:41:17.476175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:12.418 null0 00:21:12.418 [2024-11-19 12:41:17.510660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.418 [2024-11-19 12:41:17.534799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95346 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95346 /var/tmp/bperf.sock 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 95346 ']' 00:21:12.418 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:21:12.419 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:12.419 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:12.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:12.419 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:12.419 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:12.419 [2024-11-19 12:41:17.598858] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:12.419 [2024-11-19 12:41:17.599104] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95346 ] 00:21:12.678 [2024-11-19 12:41:17.743104] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.678 [2024-11-19 12:41:17.785786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.678 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:12.678 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:12.678 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:12.678 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:12.678 12:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:12.937 [2024-11-19 12:41:18.096390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:12.937 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:12.937 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:13.506 nvme0n1 00:21:13.506 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:13.506 12:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:13.506 Running I/O for 2 seconds... 
00:21:15.389 17780.00 IOPS, 69.45 MiB/s [2024-11-19T12:41:20.649Z] 17843.50 IOPS, 69.70 MiB/s 00:21:15.389 Latency(us) 00:21:15.389 [2024-11-19T12:41:20.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.389 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:15.389 nvme0n1 : 2.01 17835.07 69.67 0.00 0.00 7171.80 6613.18 19303.33 00:21:15.389 [2024-11-19T12:41:20.649Z] =================================================================================================================== 00:21:15.389 [2024-11-19T12:41:20.649Z] Total : 17835.07 69.67 0.00 0.00 7171.80 6613.18 19303.33 00:21:15.389 { 00:21:15.389 "results": [ 00:21:15.389 { 00:21:15.389 "job": "nvme0n1", 00:21:15.389 "core_mask": "0x2", 00:21:15.389 "workload": "randread", 00:21:15.389 "status": "finished", 00:21:15.389 "queue_depth": 128, 00:21:15.389 "io_size": 4096, 00:21:15.389 "runtime": 2.008122, 00:21:15.389 "iops": 17835.071773527703, 00:21:15.389 "mibps": 69.66824911534259, 00:21:15.389 "io_failed": 0, 00:21:15.389 "io_timeout": 0, 00:21:15.389 "avg_latency_us": 7171.799949335601, 00:21:15.389 "min_latency_us": 6613.178181818182, 00:21:15.389 "max_latency_us": 19303.33090909091 00:21:15.389 } 00:21:15.389 ], 00:21:15.389 "core_count": 1 00:21:15.389 } 00:21:15.389 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:15.389 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:15.389 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:15.389 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:15.389 | select(.opcode=="crc32c") 00:21:15.389 | "\(.module_name) \(.executed)"' 00:21:15.389 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:15.649 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:15.649 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:15.649 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:15.649 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:15.649 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95346 00:21:15.649 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 95346 ']' 00:21:15.649 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 95346 00:21:15.649 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:15.649 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:15.649 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95346 00:21:15.649 killing process with pid 95346 00:21:15.649 Received shutdown signal, test time was about 2.000000 seconds 00:21:15.649 00:21:15.649 Latency(us) 00:21:15.649 [2024-11-19T12:41:20.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:15.649 [2024-11-19T12:41:20.909Z] =================================================================================================================== 00:21:15.649 [2024-11-19T12:41:20.909Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:15.649 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:15.649 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:15.649 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95346' 00:21:15.649 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 95346 00:21:15.649 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 95346 00:21:15.909 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:15.909 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:15.909 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:15.909 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:15.909 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:15.909 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:15.909 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:15.909 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95393 00:21:15.909 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95393 /var/tmp/bperf.sock 00:21:15.909 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:15.909 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 95393 ']' 00:21:15.909 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:15.909 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:15.909 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:15.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:15.909 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:15.909 12:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:15.909 [2024-11-19 12:41:21.055075] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:21:15.909 [2024-11-19 12:41:21.055396] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95393 ] 00:21:15.909 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:15.909 Zero copy mechanism will not be used. 00:21:16.168 [2024-11-19 12:41:21.196261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.168 [2024-11-19 12:41:21.229410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.168 12:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:16.168 12:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:16.168 12:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:16.168 12:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:16.168 12:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:16.427 [2024-11-19 12:41:21.500116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:16.427 12:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:16.427 12:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:16.686 nvme0n1 00:21:16.686 12:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:16.686 12:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:16.945 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:16.945 Zero copy mechanism will not be used. 00:21:16.945 Running I/O for 2 seconds... 
00:21:18.819 8816.00 IOPS, 1102.00 MiB/s [2024-11-19T12:41:24.079Z] 8840.00 IOPS, 1105.00 MiB/s 00:21:18.819 Latency(us) 00:21:18.819 [2024-11-19T12:41:24.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.819 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:18.819 nvme0n1 : 2.00 8836.06 1104.51 0.00 0.00 1808.02 1616.06 5123.72 00:21:18.819 [2024-11-19T12:41:24.079Z] =================================================================================================================== 00:21:18.819 [2024-11-19T12:41:24.079Z] Total : 8836.06 1104.51 0.00 0.00 1808.02 1616.06 5123.72 00:21:18.819 { 00:21:18.819 "results": [ 00:21:18.819 { 00:21:18.819 "job": "nvme0n1", 00:21:18.819 "core_mask": "0x2", 00:21:18.819 "workload": "randread", 00:21:18.819 "status": "finished", 00:21:18.819 "queue_depth": 16, 00:21:18.819 "io_size": 131072, 00:21:18.819 "runtime": 2.002702, 00:21:18.819 "iops": 8836.062479590073, 00:21:18.819 "mibps": 1104.5078099487591, 00:21:18.819 "io_failed": 0, 00:21:18.819 "io_timeout": 0, 00:21:18.819 "avg_latency_us": 1808.0190432352458, 00:21:18.819 "min_latency_us": 1616.0581818181818, 00:21:18.819 "max_latency_us": 5123.723636363637 00:21:18.819 } 00:21:18.819 ], 00:21:18.819 "core_count": 1 00:21:18.819 } 00:21:18.819 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:18.819 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:18.819 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:18.819 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:18.819 | select(.opcode=="crc32c") 00:21:18.819 | "\(.module_name) \(.executed)"' 00:21:18.819 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:19.079 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:19.079 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:19.079 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:19.079 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:19.079 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95393 00:21:19.079 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 95393 ']' 00:21:19.079 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 95393 00:21:19.079 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:19.079 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:19.079 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95393 00:21:19.079 killing process with pid 95393 00:21:19.079 Received shutdown signal, test time was about 2.000000 seconds 00:21:19.079 00:21:19.079 Latency(us) 00:21:19.079 [2024-11-19T12:41:24.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:19.079 [2024-11-19T12:41:24.339Z] =================================================================================================================== 00:21:19.079 [2024-11-19T12:41:24.339Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:19.079 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:19.079 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:19.079 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95393' 00:21:19.079 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 95393 00:21:19.079 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 95393 00:21:19.338 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:19.338 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:19.338 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:19.338 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:19.338 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:19.338 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:19.338 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:19.338 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95440 00:21:19.338 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:19.338 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95440 /var/tmp/bperf.sock 00:21:19.338 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 95440 ']' 00:21:19.338 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:19.338 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:19.338 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:19.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:19.338 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:19.338 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:19.338 [2024-11-19 12:41:24.502110] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:21:19.339 [2024-11-19 12:41:24.502385] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95440 ] 00:21:19.597 [2024-11-19 12:41:24.641167] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.597 [2024-11-19 12:41:24.673499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.598 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:19.598 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:19.598 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:19.598 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:19.598 12:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:19.857 [2024-11-19 12:41:24.992634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:19.857 12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:19.857 12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:20.116 nvme0n1 00:21:20.116 12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:20.116 12:41:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:20.375 Running I/O for 2 seconds... 
00:21:22.249 19305.00 IOPS, 75.41 MiB/s [2024-11-19T12:41:27.509Z] 19304.50 IOPS, 75.41 MiB/s 00:21:22.249 Latency(us) 00:21:22.249 [2024-11-19T12:41:27.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.249 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:22.249 nvme0n1 : 2.00 19337.78 75.54 0.00 0.00 6613.75 3619.37 14775.39 00:21:22.249 [2024-11-19T12:41:27.509Z] =================================================================================================================== 00:21:22.249 [2024-11-19T12:41:27.509Z] Total : 19337.78 75.54 0.00 0.00 6613.75 3619.37 14775.39 00:21:22.249 { 00:21:22.249 "results": [ 00:21:22.249 { 00:21:22.249 "job": "nvme0n1", 00:21:22.249 "core_mask": "0x2", 00:21:22.249 "workload": "randwrite", 00:21:22.249 "status": "finished", 00:21:22.249 "queue_depth": 128, 00:21:22.249 "io_size": 4096, 00:21:22.249 "runtime": 2.003177, 00:21:22.249 "iops": 19337.781933398797, 00:21:22.249 "mibps": 75.53821067733905, 00:21:22.249 "io_failed": 0, 00:21:22.249 "io_timeout": 0, 00:21:22.249 "avg_latency_us": 6613.746287012416, 00:21:22.249 "min_latency_us": 3619.3745454545456, 00:21:22.249 "max_latency_us": 14775.389090909091 00:21:22.249 } 00:21:22.249 ], 00:21:22.249 "core_count": 1 00:21:22.249 } 00:21:22.249 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:22.249 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:22.249 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:22.249 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:22.249 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:22.249 | select(.opcode=="crc32c") 00:21:22.249 | "\(.module_name) \(.executed)"' 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95440 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 95440 ']' 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 95440 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95440 00:21:22.818 killing process with pid 95440 00:21:22.818 Received shutdown signal, test time was about 2.000000 seconds 00:21:22.818 00:21:22.818 Latency(us) 00:21:22.818 [2024-11-19T12:41:28.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:22.818 [2024-11-19T12:41:28.078Z] =================================================================================================================== 00:21:22.818 [2024-11-19T12:41:28.078Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95440' 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 95440 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 95440 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95494 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95494 /var/tmp/bperf.sock 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 95494 ']' 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:22.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.818 12:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:22.818 [2024-11-19 12:41:28.029370] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:21:22.818 [2024-11-19 12:41:28.029657] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95494 ] 00:21:22.818 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:22.818 Zero copy mechanism will not be used. 00:21:23.078 [2024-11-19 12:41:28.165590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.078 [2024-11-19 12:41:28.198778] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.012 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:24.012 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:24.012 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:24.012 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:24.012 12:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:24.271 [2024-11-19 12:41:29.274418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:24.271 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:24.271 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:24.530 nvme0n1 00:21:24.530 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:24.530 12:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:24.530 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:24.530 Zero copy mechanism will not be used. 00:21:24.530 Running I/O for 2 seconds... 
00:21:26.843 7088.00 IOPS, 886.00 MiB/s [2024-11-19T12:41:32.103Z] 7076.50 IOPS, 884.56 MiB/s 00:21:26.843 Latency(us) 00:21:26.843 [2024-11-19T12:41:32.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.843 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:26.843 nvme0n1 : 2.00 7074.06 884.26 0.00 0.00 2256.48 2010.76 8162.21 00:21:26.843 [2024-11-19T12:41:32.103Z] =================================================================================================================== 00:21:26.843 [2024-11-19T12:41:32.103Z] Total : 7074.06 884.26 0.00 0.00 2256.48 2010.76 8162.21 00:21:26.843 { 00:21:26.844 "results": [ 00:21:26.844 { 00:21:26.844 "job": "nvme0n1", 00:21:26.844 "core_mask": "0x2", 00:21:26.844 "workload": "randwrite", 00:21:26.844 "status": "finished", 00:21:26.844 "queue_depth": 16, 00:21:26.844 "io_size": 131072, 00:21:26.844 "runtime": 2.00281, 00:21:26.844 "iops": 7074.0609443731555, 00:21:26.844 "mibps": 884.2576180466444, 00:21:26.844 "io_failed": 0, 00:21:26.844 "io_timeout": 0, 00:21:26.844 "avg_latency_us": 2256.482636414968, 00:21:26.844 "min_latency_us": 2010.7636363636364, 00:21:26.844 "max_latency_us": 8162.210909090909 00:21:26.844 } 00:21:26.844 ], 00:21:26.844 "core_count": 1 00:21:26.844 } 00:21:26.844 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:26.844 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:26.844 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:26.844 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:26.844 | select(.opcode=="crc32c") 00:21:26.844 | "\(.module_name) \(.executed)"' 00:21:26.844 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:26.844 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:26.844 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:26.844 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:26.844 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:26.844 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95494 00:21:26.844 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 95494 ']' 00:21:26.844 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 95494 00:21:26.844 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:26.844 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:26.844 12:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95494 00:21:26.844 killing process with pid 95494 00:21:26.844 Received shutdown signal, test time was about 2.000000 seconds 00:21:26.844 00:21:26.844 Latency(us) 00:21:26.844 [2024-11-19T12:41:32.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:26.844 [2024-11-19T12:41:32.104Z] =================================================================================================================== 00:21:26.844 [2024-11-19T12:41:32.104Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:26.844 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:26.844 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:26.844 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95494' 00:21:26.844 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 95494 00:21:26.844 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 95494 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 95321 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 95321 ']' 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 95321 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95321 00:21:27.103 killing process with pid 95321 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95321' 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 95321 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 95321 00:21:27.103 00:21:27.103 real 0m15.221s 00:21:27.103 user 0m29.733s 00:21:27.103 sys 0m4.349s 00:21:27.103 ************************************ 00:21:27.103 END TEST nvmf_digest_clean 00:21:27.103 ************************************ 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:27.103 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:27.402 ************************************ 00:21:27.402 START TEST nvmf_digest_error 00:21:27.402 ************************************ 00:21:27.402 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:21:27.402 12:41:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:27.402 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:27.402 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:27.402 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:27.402 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=95578 00:21:27.402 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 95578 00:21:27.402 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 95578 ']' 00:21:27.402 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.402 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:27.402 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:27.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.402 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.402 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:27.402 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:27.402 [2024-11-19 12:41:32.433103] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:27.402 [2024-11-19 12:41:32.433373] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.402 [2024-11-19 12:41:32.572482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.402 [2024-11-19 12:41:32.606316] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.402 [2024-11-19 12:41:32.606370] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.402 [2024-11-19 12:41:32.606398] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.402 [2024-11-19 12:41:32.606405] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.402 [2024-11-19 12:41:32.606411] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
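The nvmf_digest_clean passes traced above all follow the same bperf pattern: start bdevperf suspended with --wait-for-rpc, finish framework init over /var/tmp/bperf.sock, attach the TCP controller with data digest (--ddgst) enabled, run perform_tests, and read the accel stats to confirm which module executed crc32c. A minimal by-hand sketch of that sequence, assuming the SPDK checkout at /home/vagrant/spdk_repo/spdk used in this run:

# Start bdevperf suspended so the transport can be configured first (flags as traced above).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &

# Complete framework init, attach the target with data digest enabled, then run the workload.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

# Check which accel module executed crc32c, mirroring the get_accel_stats | jq step above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'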
00:21:27.402 [2024-11-19 12:41:32.606435] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.672 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:27.672 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:27.672 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:27.672 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:27.672 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:27.672 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.672 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:27.672 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.672 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:27.672 [2024-11-19 12:41:32.702794] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:27.672 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.672 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:27.672 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:27.672 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.672 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:27.672 [2024-11-19 12:41:32.738190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:27.672 null0 00:21:27.672 [2024-11-19 12:41:32.769336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.673 [2024-11-19 12:41:32.793417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:27.673 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.673 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:27.673 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:27.673 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:27.673 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:27.673 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:27.673 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:27.673 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95603 00:21:27.673 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95603 /var/tmp/bperf.sock 00:21:27.673 12:41:32 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 95603 ']' 00:21:27.673 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:27.673 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:27.673 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:27.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:27.673 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:27.673 12:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:27.673 [2024-11-19 12:41:32.842425] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:27.673 [2024-11-19 12:41:32.842655] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95603 ] 00:21:27.939 [2024-11-19 12:41:32.976298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.939 [2024-11-19 12:41:33.009525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.939 [2024-11-19 12:41:33.037340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:27.939 12:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:27.939 12:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:27.939 12:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:27.939 12:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:28.197 12:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:28.197 12:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.198 12:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:28.198 12:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.198 12:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:28.198 12:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:28.456 nvme0n1 00:21:28.456 12:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:28.456 12:41:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.456 12:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:28.456 12:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.456 12:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:28.456 12:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:28.714 Running I/O for 2 seconds... 00:21:28.714 [2024-11-19 12:41:33.824013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.714 [2024-11-19 12:41:33.824103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.714 [2024-11-19 12:41:33.824133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.714 [2024-11-19 12:41:33.839021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.714 [2024-11-19 12:41:33.839056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.714 [2024-11-19 12:41:33.839085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.714 [2024-11-19 12:41:33.853363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.714 [2024-11-19 12:41:33.853398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.714 [2024-11-19 12:41:33.853426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.714 [2024-11-19 12:41:33.867876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.714 [2024-11-19 12:41:33.867912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.714 [2024-11-19 12:41:33.867941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.714 [2024-11-19 12:41:33.882098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.714 [2024-11-19 12:41:33.882131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.714 [2024-11-19 12:41:33.882159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.714 [2024-11-19 12:41:33.896386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.714 [2024-11-19 12:41:33.896421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13349 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.714 [2024-11-19 12:41:33.896450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.714 [2024-11-19 12:41:33.910457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.714 [2024-11-19 12:41:33.910490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.714 [2024-11-19 12:41:33.910518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.714 [2024-11-19 12:41:33.924585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.714 [2024-11-19 12:41:33.924618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.714 [2024-11-19 12:41:33.924646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.714 [2024-11-19 12:41:33.938642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.714 [2024-11-19 12:41:33.938700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.714 [2024-11-19 12:41:33.938713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.714 [2024-11-19 12:41:33.952951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.714 [2024-11-19 12:41:33.952984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.714 [2024-11-19 12:41:33.953013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.715 [2024-11-19 12:41:33.966947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.715 [2024-11-19 12:41:33.967139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.715 [2024-11-19 12:41:33.967157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.974 [2024-11-19 12:41:33.982584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.974 [2024-11-19 12:41:33.982779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.974 [2024-11-19 12:41:33.982795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.974 [2024-11-19 12:41:33.997007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.974 [2024-11-19 12:41:33.997041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:22688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.974 [2024-11-19 12:41:33.997070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.974 [2024-11-19 12:41:34.011098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.974 [2024-11-19 12:41:34.011131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.974 [2024-11-19 12:41:34.011159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.974 [2024-11-19 12:41:34.025335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.974 [2024-11-19 12:41:34.025369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.974 [2024-11-19 12:41:34.025398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.974 [2024-11-19 12:41:34.039398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.974 [2024-11-19 12:41:34.039559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.974 [2024-11-19 12:41:34.039592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.974 [2024-11-19 12:41:34.053771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.974 [2024-11-19 12:41:34.053952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.974 [2024-11-19 12:41:34.053968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.974 [2024-11-19 12:41:34.068405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.974 [2024-11-19 12:41:34.068441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.974 [2024-11-19 12:41:34.068469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.974 [2024-11-19 12:41:34.082593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.974 [2024-11-19 12:41:34.082627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.974 [2024-11-19 12:41:34.082655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.974 [2024-11-19 12:41:34.096971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.974 [2024-11-19 12:41:34.097003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.974 [2024-11-19 12:41:34.097031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.974 [2024-11-19 12:41:34.111283] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.974 [2024-11-19 12:41:34.111471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.974 [2024-11-19 12:41:34.111504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.974 [2024-11-19 12:41:34.125851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.974 [2024-11-19 12:41:34.125885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.974 [2024-11-19 12:41:34.125913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.974 [2024-11-19 12:41:34.140094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.974 [2024-11-19 12:41:34.140128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.974 [2024-11-19 12:41:34.140157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.975 [2024-11-19 12:41:34.154288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.975 [2024-11-19 12:41:34.154321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.975 [2024-11-19 12:41:34.154350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.975 [2024-11-19 12:41:34.168628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.975 [2024-11-19 12:41:34.168663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.975 [2024-11-19 12:41:34.168703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.975 [2024-11-19 12:41:34.182626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.975 [2024-11-19 12:41:34.182660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.975 [2024-11-19 12:41:34.182701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.975 [2024-11-19 12:41:34.196961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.975 
[2024-11-19 12:41:34.196993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.975 [2024-11-19 12:41:34.197022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.975 [2024-11-19 12:41:34.210916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.975 [2024-11-19 12:41:34.211098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.975 [2024-11-19 12:41:34.211113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:28.975 [2024-11-19 12:41:34.225209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:28.975 [2024-11-19 12:41:34.225243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:28.975 [2024-11-19 12:41:34.225271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.234 [2024-11-19 12:41:34.240690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.234 [2024-11-19 12:41:34.240722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.234 [2024-11-19 12:41:34.240750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.234 [2024-11-19 12:41:34.254865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.234 [2024-11-19 12:41:34.254898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.234 [2024-11-19 12:41:34.254926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.234 [2024-11-19 12:41:34.268927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.234 [2024-11-19 12:41:34.268960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.234 [2024-11-19 12:41:34.268988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.234 [2024-11-19 12:41:34.282930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.234 [2024-11-19 12:41:34.283112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.234 [2024-11-19 12:41:34.283128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.234 [2024-11-19 12:41:34.297351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xefa5a0) 00:21:29.234 [2024-11-19 12:41:34.297385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.234 [2024-11-19 12:41:34.297414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.234 [2024-11-19 12:41:34.311430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.234 [2024-11-19 12:41:34.311465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.234 [2024-11-19 12:41:34.311494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.234 [2024-11-19 12:41:34.325733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.234 [2024-11-19 12:41:34.325934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.234 [2024-11-19 12:41:34.326033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.234 [2024-11-19 12:41:34.340192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.234 [2024-11-19 12:41:34.340227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.234 [2024-11-19 12:41:34.340256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.234 [2024-11-19 12:41:34.354354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.234 [2024-11-19 12:41:34.354387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.234 [2024-11-19 12:41:34.354414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.234 [2024-11-19 12:41:34.368468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.234 [2024-11-19 12:41:34.368502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.234 [2024-11-19 12:41:34.368531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.234 [2024-11-19 12:41:34.382594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.235 [2024-11-19 12:41:34.382629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.235 [2024-11-19 12:41:34.382657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.235 [2024-11-19 12:41:34.396629] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.235 [2024-11-19 12:41:34.396662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.235 [2024-11-19 12:41:34.396719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.235 [2024-11-19 12:41:34.410650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.235 [2024-11-19 12:41:34.410709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.235 [2024-11-19 12:41:34.410721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.235 [2024-11-19 12:41:34.424776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.235 [2024-11-19 12:41:34.424808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.235 [2024-11-19 12:41:34.424836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.235 [2024-11-19 12:41:34.438779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.235 [2024-11-19 12:41:34.438945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.235 [2024-11-19 12:41:34.438977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.235 [2024-11-19 12:41:34.454000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.235 [2024-11-19 12:41:34.454223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.235 [2024-11-19 12:41:34.454240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.235 [2024-11-19 12:41:34.470882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.235 [2024-11-19 12:41:34.470921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.235 [2024-11-19 12:41:34.470935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.235 [2024-11-19 12:41:34.487901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.235 [2024-11-19 12:41:34.487972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.235 [2024-11-19 12:41:34.487986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
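The stream of 'data digest error' / COMMAND TRANSIENT TRANSPORT ERROR completions in this stretch is the intended outcome of the nvmf_digest_error setup traced above: crc32c on the target is assigned to the error accel module and told to corrupt digests, while the host attaches with --ddgst and --bdev-retry-count -1 so the corrupted-digest completions are retried rather than failing the run. A minimal sketch of that wiring, assuming the target answers on its default /var/tmp/spdk.sock RPC socket as in this run:

# Target side: route crc32c through the error accel module and inject corrupt results
# (arguments exactly as traced above: -t corrupt -i 256).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o crc32c -m error
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

# Host side (bperf socket): unlimited bdev retries plus data digest on the attached controller,
# so injected digest failures surface as retriable transport errors instead of I/O failures.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0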
00:21:29.495 [2024-11-19 12:41:34.504572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.504611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.495 [2024-11-19 12:41:34.504625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.495 [2024-11-19 12:41:34.519647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.519876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.495 [2024-11-19 12:41:34.519892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.495 [2024-11-19 12:41:34.534996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.535182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.495 [2024-11-19 12:41:34.535197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.495 [2024-11-19 12:41:34.550563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.550599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.495 [2024-11-19 12:41:34.550611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.495 [2024-11-19 12:41:34.565777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.565811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.495 [2024-11-19 12:41:34.565823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.495 [2024-11-19 12:41:34.580444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.580478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.495 [2024-11-19 12:41:34.580490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.495 [2024-11-19 12:41:34.595531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.595747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.495 [2024-11-19 12:41:34.595763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.495 [2024-11-19 12:41:34.610759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.610795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.495 [2024-11-19 12:41:34.610808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.495 [2024-11-19 12:41:34.625930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.626099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.495 [2024-11-19 12:41:34.626115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.495 [2024-11-19 12:41:34.641110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.641279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.495 [2024-11-19 12:41:34.641295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.495 [2024-11-19 12:41:34.656292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.656327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.495 [2024-11-19 12:41:34.656339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.495 [2024-11-19 12:41:34.671182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.671375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.495 [2024-11-19 12:41:34.671392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.495 [2024-11-19 12:41:34.685951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.686133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.495 [2024-11-19 12:41:34.686279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.495 [2024-11-19 12:41:34.700816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.701000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.495 [2024-11-19 12:41:34.701147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.495 [2024-11-19 12:41:34.715633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.715864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.495 [2024-11-19 12:41:34.715991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.495 [2024-11-19 12:41:34.730242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.730441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.495 [2024-11-19 12:41:34.730572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.495 [2024-11-19 12:41:34.751722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.495 [2024-11-19 12:41:34.751945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.755 [2024-11-19 12:41:34.752188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.755 [2024-11-19 12:41:34.767014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.755 [2024-11-19 12:41:34.767196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.755 [2024-11-19 12:41:34.767380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.755 [2024-11-19 12:41:34.781865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.755 [2024-11-19 12:41:34.782065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.755 [2024-11-19 12:41:34.782240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.756 [2024-11-19 12:41:34.797741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.756 [2024-11-19 12:41:34.797923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-11-19 12:41:34.798075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.756 17079.00 IOPS, 66.71 MiB/s [2024-11-19T12:41:35.016Z] [2024-11-19 12:41:34.813817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.756 [2024-11-19 12:41:34.813855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:11932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-11-19 12:41:34.813869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.756 [2024-11-19 12:41:34.831145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.756 [2024-11-19 12:41:34.831179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-11-19 12:41:34.831207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.756 [2024-11-19 12:41:34.846924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.756 [2024-11-19 12:41:34.846961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-11-19 12:41:34.846974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.756 [2024-11-19 12:41:34.861712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.756 [2024-11-19 12:41:34.861745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-11-19 12:41:34.861773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.756 [2024-11-19 12:41:34.876522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.756 [2024-11-19 12:41:34.876556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-11-19 12:41:34.876585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.756 [2024-11-19 12:41:34.890698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.756 [2024-11-19 12:41:34.890760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-11-19 12:41:34.890772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.756 [2024-11-19 12:41:34.904981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.756 [2024-11-19 12:41:34.905147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-11-19 12:41:34.905180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.756 [2024-11-19 12:41:34.919256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.756 [2024-11-19 12:41:34.919465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-11-19 12:41:34.919482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.756 [2024-11-19 12:41:34.933564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.756 [2024-11-19 12:41:34.933598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-11-19 12:41:34.933626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.756 [2024-11-19 12:41:34.947850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.756 [2024-11-19 12:41:34.947883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-11-19 12:41:34.947912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.756 [2024-11-19 12:41:34.961890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.756 [2024-11-19 12:41:34.961924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-11-19 12:41:34.961953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.756 [2024-11-19 12:41:34.975997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.756 [2024-11-19 12:41:34.976030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-11-19 12:41:34.976058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.756 [2024-11-19 12:41:34.990050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.756 [2024-11-19 12:41:34.990100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-11-19 12:41:34.990129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.756 [2024-11-19 12:41:35.004491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:29.756 [2024-11-19 12:41:35.004524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.756 [2024-11-19 12:41:35.004553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.020275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 
00:21:30.016 [2024-11-19 12:41:35.020308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.020337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.034501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.016 [2024-11-19 12:41:35.034534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.034562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.048699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.016 [2024-11-19 12:41:35.048733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.048761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.062730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.016 [2024-11-19 12:41:35.062762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.062791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.077015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.016 [2024-11-19 12:41:35.077048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.077076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.091011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.016 [2024-11-19 12:41:35.091043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.091072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.105326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.016 [2024-11-19 12:41:35.105358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.105387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.119551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.016 [2024-11-19 12:41:35.119587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.119599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.133701] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.016 [2024-11-19 12:41:35.133735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.133764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.147864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.016 [2024-11-19 12:41:35.147896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.147924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.162116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.016 [2024-11-19 12:41:35.162148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.162177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.176118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.016 [2024-11-19 12:41:35.176151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.176179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.190177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.016 [2024-11-19 12:41:35.190225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.190254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.204407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.016 [2024-11-19 12:41:35.204439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.204468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.218633] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.016 [2024-11-19 12:41:35.218693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.218706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.232761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.016 [2024-11-19 12:41:35.232793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.232821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.246918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.016 [2024-11-19 12:41:35.247084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.016 [2024-11-19 12:41:35.247116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.016 [2024-11-19 12:41:35.261322] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.017 [2024-11-19 12:41:35.261355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.017 [2024-11-19 12:41:35.261383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.276 [2024-11-19 12:41:35.276735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.276799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.276812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.290929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.291095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.291127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.305342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.305376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.305405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
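Each injected digest failure in the dump above follows the same three-record pattern: nvme_tcp.c:1470 reports the data digest error on the qpair, nvme_qpair.c:243 prints the READ that carried the corrupted payload, and nvme_qpair.c:474 completes it as COMMAND TRANSIENT TRANSPORT ERROR (00/22); the test later reads the accumulated count of that status code from bdev_get_iostat. When scanning a saved copy of this console output by hand, a rough tally can be pulled with standard text tools. The snippet below is only a log-reading aid, not part of host/digest.sh, and build.log is a placeholder name for wherever this output was saved.

# Count how many digest errors the host side detected in a saved copy of this log (build.log is hypothetical).
grep -c 'data digest error on tqpair' build.log

# Break the transient-transport-error completions down per queue pair to confirm they all land on qid:1.
grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:[0-9]*' build.log | sort | uniq -c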
00:21:30.277 [2024-11-19 12:41:35.319532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.319569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.319598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.333690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.333725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.333753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.347933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.347966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.347997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.361942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.361974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.362002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.376001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.376034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.376063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.389961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.389993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.390021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.404153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.404185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.404213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.418191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.418224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.418251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.432275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.432308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.432336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.446383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.446416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.446444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.460710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.460742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.460769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.474761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.474916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.474949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.489197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.489231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.489259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.503509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.503545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.503573] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.517756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.517919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.517950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.277 [2024-11-19 12:41:35.532733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.277 [2024-11-19 12:41:35.532795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.277 [2024-11-19 12:41:35.532807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.537 [2024-11-19 12:41:35.547731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.537 [2024-11-19 12:41:35.547763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.537 [2024-11-19 12:41:35.547790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.537 [2024-11-19 12:41:35.561804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.537 [2024-11-19 12:41:35.561836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.537 [2024-11-19 12:41:35.561864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.537 [2024-11-19 12:41:35.575936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.537 [2024-11-19 12:41:35.575968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.537 [2024-11-19 12:41:35.575995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.537 [2024-11-19 12:41:35.590019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.537 [2024-11-19 12:41:35.590052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.537 [2024-11-19 12:41:35.590080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.537 [2024-11-19 12:41:35.604302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.537 [2024-11-19 12:41:35.604334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.537 [2024-11-19 12:41:35.604363] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.537 [2024-11-19 12:41:35.618635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.537 [2024-11-19 12:41:35.618693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.537 [2024-11-19 12:41:35.618722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.537 [2024-11-19 12:41:35.632788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.537 [2024-11-19 12:41:35.632820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.537 [2024-11-19 12:41:35.632849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.537 [2024-11-19 12:41:35.646873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.537 [2024-11-19 12:41:35.647027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.537 [2024-11-19 12:41:35.647058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.537 [2024-11-19 12:41:35.661532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.537 [2024-11-19 12:41:35.661566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.537 [2024-11-19 12:41:35.661594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.537 [2024-11-19 12:41:35.683179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.537 [2024-11-19 12:41:35.683373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.537 [2024-11-19 12:41:35.683391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.537 [2024-11-19 12:41:35.700417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.537 [2024-11-19 12:41:35.700452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.537 [2024-11-19 12:41:35.700464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.538 [2024-11-19 12:41:35.715985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.538 [2024-11-19 12:41:35.716020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.538 
[2024-11-19 12:41:35.716032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.538 [2024-11-19 12:41:35.731060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.538 [2024-11-19 12:41:35.731112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.538 [2024-11-19 12:41:35.731124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.538 [2024-11-19 12:41:35.746241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.538 [2024-11-19 12:41:35.746275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.538 [2024-11-19 12:41:35.746287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.538 [2024-11-19 12:41:35.761618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.538 [2024-11-19 12:41:35.761653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.538 [2024-11-19 12:41:35.761677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.538 [2024-11-19 12:41:35.776848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.538 [2024-11-19 12:41:35.777021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.538 [2024-11-19 12:41:35.777038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.538 [2024-11-19 12:41:35.793121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xefa5a0) 00:21:30.538 [2024-11-19 12:41:35.793158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.538 [2024-11-19 12:41:35.793171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.797 17268.00 IOPS, 67.45 MiB/s 00:21:30.797 Latency(us) 00:21:30.797 [2024-11-19T12:41:36.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.797 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:30.797 nvme0n1 : 2.01 17271.61 67.47 0.00 0.00 7406.05 6702.55 29193.31 00:21:30.797 [2024-11-19T12:41:36.057Z] =================================================================================================================== 00:21:30.797 [2024-11-19T12:41:36.057Z] Total : 17271.61 67.47 0.00 0.00 7406.05 6702.55 29193.31 00:21:30.797 { 00:21:30.797 "results": [ 00:21:30.797 { 00:21:30.797 "job": "nvme0n1", 00:21:30.797 "core_mask": "0x2", 00:21:30.797 "workload": "randread", 00:21:30.797 "status": "finished", 00:21:30.797 "queue_depth": 128, 
00:21:30.797 "io_size": 4096, 00:21:30.797 "runtime": 2.006993, 00:21:30.797 "iops": 17271.609816277385, 00:21:30.797 "mibps": 67.46722584483354, 00:21:30.797 "io_failed": 0, 00:21:30.797 "io_timeout": 0, 00:21:30.797 "avg_latency_us": 7406.050620397373, 00:21:30.797 "min_latency_us": 6702.545454545455, 00:21:30.797 "max_latency_us": 29193.30909090909 00:21:30.797 } 00:21:30.797 ], 00:21:30.797 "core_count": 1 00:21:30.797 } 00:21:30.797 12:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:30.797 12:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:30.797 | .driver_specific 00:21:30.797 | .nvme_error 00:21:30.797 | .status_code 00:21:30.797 | .command_transient_transport_error' 00:21:30.797 12:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:30.797 12:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 135 > 0 )) 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95603 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 95603 ']' 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 95603 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95603 00:21:31.057 killing process with pid 95603 00:21:31.057 Received shutdown signal, test time was about 2.000000 seconds 00:21:31.057 00:21:31.057 Latency(us) 00:21:31.057 [2024-11-19T12:41:36.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.057 [2024-11-19T12:41:36.317Z] =================================================================================================================== 00:21:31.057 [2024-11-19T12:41:36.317Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95603' 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 95603 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 95603 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:31.057 12:41:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95650 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95650 /var/tmp/bperf.sock 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 95650 ']' 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:31.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:31.057 12:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:31.057 [2024-11-19 12:41:36.297858] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:31.057 [2024-11-19 12:41:36.298152] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95650 ] 00:21:31.057 I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
00:21:31.316 [2024-11-19 12:41:36.441318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.316 [2024-11-19 12:41:36.474015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.316 [2024-11-19 12:41:36.502919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:32.253 12:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:32.253 12:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:32.253 12:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:32.253 12:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:32.253 12:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:32.253 12:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.253 12:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:32.253 12:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.253 12:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:32.253 12:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:32.822 nvme0n1 00:21:32.822 12:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:32.822 12:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.822 12:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:32.823 12:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.823 12:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:32.823 12:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:32.823 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:32.823 Zero copy mechanism will not be used. 00:21:32.823 Running I/O for 2 seconds...
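The xtrace above sets up the second error pass (randread, 128 KiB I/O, queue depth 16): NVMe error counters and unlimited bdev retries are enabled over the bperf socket, a controller is attached with data digest checking (--ddgst), CRC-32C corruption is re-armed in the accel layer, and bdevperf is told to run the workload, after which the script reads the command_transient_transport_error counter just as it did for the 135 errors counted in the first pass. A condensed sketch of that sequence follows, pieced together from the RPC calls visible in this trace; paths mirror this log, while the target-side socket is an assumption, since the trace only shows the rpc_cmd wrapper for the injection calls.

# Sketch of the digest-error flow shown above; sockets and paths follow this log, target socket assumed.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
BPERF_SOCK=/var/tmp/bperf.sock        # bdevperf was started with -r /var/tmp/bperf.sock
TGT_SOCK=/var/tmp/spdk.sock           # assumed default socket of the nvmf target that rpc_cmd talks to

# Keep per-status NVMe error counters and retry failed I/O indefinitely inside the bdev layer.
$RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the target with data digest enabled so every received payload is CRC-32C verified.
$RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-arm crc32c corruption in the accel layer (flags copied from the trace), then drive the 2-second workload.
$RPC -s $TGT_SOCK accel_error_inject_error -o crc32c -t corrupt -i 32
$BPERF_PY -s $BPERF_SOCK perform_tests

# Each mismatch completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22); read the running total.
$RPC -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'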
00:21:32.823 [2024-11-19 12:41:37.938203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:37.938263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:37.938278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:37.942323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:37.942358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:37.942387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:37.946473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:37.946509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:37.946538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:37.950394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:37.950429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:37.950458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:37.954319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:37.954353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:37.954382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:37.958281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:37.958317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:37.958346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:37.962231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:37.962265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:37.962294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:37.966178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:37.966213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:37.966241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:37.970085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:37.970119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:37.970148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:37.974013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:37.974047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:37.974075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:37.977984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:37.978019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:37.978047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:37.981931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:37.981964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:37.981993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:37.985825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:37.985858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:37.985886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:37.989773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:37.989806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:37.989835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:37.993622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:37.993657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:37.993702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:37.997556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:37.997591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:37.997619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:38.001546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:38.001580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:38.001609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:38.005531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:38.005565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:38.005593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:38.009610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:38.009644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:38.009674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:38.013548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:38.013581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:38.013610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:38.017748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:38.017782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:32.823 [2024-11-19 12:41:38.017811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:38.021630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:38.021693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:38.021707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:38.025527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:38.025561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:38.025590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:38.029548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:38.029581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:38.029611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.823 [2024-11-19 12:41:38.033540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.823 [2024-11-19 12:41:38.033575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.823 [2024-11-19 12:41:38.033604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.824 [2024-11-19 12:41:38.037547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.824 [2024-11-19 12:41:38.037581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.824 [2024-11-19 12:41:38.037610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.824 [2024-11-19 12:41:38.041595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.824 [2024-11-19 12:41:38.041630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.824 [2024-11-19 12:41:38.041658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.824 [2024-11-19 12:41:38.045557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.824 [2024-11-19 12:41:38.045592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.824 [2024-11-19 12:41:38.045621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.824 [2024-11-19 12:41:38.049624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.824 [2024-11-19 12:41:38.049659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.824 [2024-11-19 12:41:38.049702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.824 [2024-11-19 12:41:38.053593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.824 [2024-11-19 12:41:38.053626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.824 [2024-11-19 12:41:38.053655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.824 [2024-11-19 12:41:38.057508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.824 [2024-11-19 12:41:38.057542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.824 [2024-11-19 12:41:38.057571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.824 [2024-11-19 12:41:38.061539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.824 [2024-11-19 12:41:38.061572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.824 [2024-11-19 12:41:38.061601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.824 [2024-11-19 12:41:38.065495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.824 [2024-11-19 12:41:38.065528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.824 [2024-11-19 12:41:38.065557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.824 [2024-11-19 12:41:38.069504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.824 [2024-11-19 12:41:38.069538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.824 [2024-11-19 12:41:38.069567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.824 [2024-11-19 12:41:38.073489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.824 [2024-11-19 12:41:38.073524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.824 [2024-11-19 12:41:38.073552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.824 [2024-11-19 12:41:38.078003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:32.824 [2024-11-19 12:41:38.078038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.824 [2024-11-19 12:41:38.078066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.085 [2024-11-19 12:41:38.082301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.085 [2024-11-19 12:41:38.082336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.085 [2024-11-19 12:41:38.082364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.085 [2024-11-19 12:41:38.086716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.085 [2024-11-19 12:41:38.086750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.085 [2024-11-19 12:41:38.086778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.085 [2024-11-19 12:41:38.090594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.085 [2024-11-19 12:41:38.090798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.085 [2024-11-19 12:41:38.090814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.085 [2024-11-19 12:41:38.095245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.085 [2024-11-19 12:41:38.095280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.085 [2024-11-19 12:41:38.095333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.085 [2024-11-19 12:41:38.099695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.085 [2024-11-19 12:41:38.099761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.085 [2024-11-19 12:41:38.099775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.085 [2024-11-19 12:41:38.103909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 
00:21:33.085 [2024-11-19 12:41:38.103945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.085 [2024-11-19 12:41:38.103958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.085 [2024-11-19 12:41:38.108151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.085 [2024-11-19 12:41:38.108204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.085 [2024-11-19 12:41:38.108217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.085 [2024-11-19 12:41:38.112546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.085 [2024-11-19 12:41:38.112584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.085 [2024-11-19 12:41:38.112597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.085 [2024-11-19 12:41:38.117150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.085 [2024-11-19 12:41:38.117185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.085 [2024-11-19 12:41:38.117198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.085 [2024-11-19 12:41:38.121598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.085 [2024-11-19 12:41:38.121633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.121645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.125963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.125998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.126028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.130121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.130157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.130169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.134254] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.134289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.134301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.138391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.138459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.138488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.142497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.142533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.142545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.146617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.146653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.146694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.150620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.150656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.150715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.154765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.154814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.154828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.158983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.159021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.159034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.162922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.162958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.162971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.167029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.167080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.167093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.171115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.171153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.171166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.175202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.175237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.175249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.179213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.179250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.179262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.183191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.183226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.183238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.187280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.187359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.187388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.191451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.191489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.191502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.195577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.195615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.195642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.199650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.199729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.199743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.203910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.203960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.203973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.207978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.208013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.208026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.211932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.211966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.211978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.215873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.215907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.215919] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.220076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.220127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.220141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.224138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.224184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.224197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.086 [2024-11-19 12:41:38.228187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.086 [2024-11-19 12:41:38.228223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.086 [2024-11-19 12:41:38.228235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.232154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.232189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.232201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.236243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.236279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.236292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.240300] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.240336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.240348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.244314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.244349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.244362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.248359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.248395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.248408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.252552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.252587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.252599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.256658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.256702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.256715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.260702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.260736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.260748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.264726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.264760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.264772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.268917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.268952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.268964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.272924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.272959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.272971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.276894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.276930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.276942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.280910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.280945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.280957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.285154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.285189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.285219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.289171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.289204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.289234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.293217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.293250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.293280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.297321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.297354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.297385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.301287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.301322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.301352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.305279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.305313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.305342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.309248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.309282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.309312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.313131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.313164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.313194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.317091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.317125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.317155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.321038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.321072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.321102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.325007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.325042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.325072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.328991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 
00:21:33.087 [2024-11-19 12:41:38.329024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.329053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.332934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.332968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.087 [2024-11-19 12:41:38.332998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.087 [2024-11-19 12:41:38.336932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.087 [2024-11-19 12:41:38.336968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.088 [2024-11-19 12:41:38.336999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.348 [2024-11-19 12:41:38.341341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.348 [2024-11-19 12:41:38.341378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.348 [2024-11-19 12:41:38.341409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.348 [2024-11-19 12:41:38.345560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.348 [2024-11-19 12:41:38.345594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.348 [2024-11-19 12:41:38.345625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.348 [2024-11-19 12:41:38.349926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.348 [2024-11-19 12:41:38.349960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.348 [2024-11-19 12:41:38.349990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.348 [2024-11-19 12:41:38.353841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.348 [2024-11-19 12:41:38.353874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.348 [2024-11-19 12:41:38.353904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.348 [2024-11-19 12:41:38.357730] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.348 [2024-11-19 12:41:38.357763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.348 [2024-11-19 12:41:38.357793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.361693] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.361726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.361756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.365614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.365648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.365678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.369544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.369578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.369608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.373515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.373549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.373579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.377502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.377536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.377566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.381472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.381506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.381536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.385399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.385433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.385463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.389319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.389353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.389382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.393243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.393276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.393305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.397156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.397190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.397218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.401069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.401103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.401132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.405114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.405148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.405177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.409063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.409097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.409142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.413037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.413070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.413098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.417045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.417078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.417107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.421010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.421044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.421073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.424981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.425014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.425043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.428952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.428985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.429015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.432999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.433032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.433060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.436901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.436935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.436964] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.440859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.440892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.440921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.444849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.444883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.444911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.448825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.448858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.448886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.452732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.452765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.452794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.456750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.456782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.456811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.460718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.460751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.349 [2024-11-19 12:41:38.460780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.349 [2024-11-19 12:41:38.464543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.349 [2024-11-19 12:41:38.464722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.464755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.468802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.468836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.468865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.472765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.472798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.472827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.476655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.476699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.476728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.480668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.480712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.480742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.484561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.484760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.484778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.488798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.488832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.488861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.492769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.492804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.492832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.496767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.496800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.496829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.500691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.500724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.500753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.504715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.504748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.504777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.508651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.508851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.508867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.512808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.512842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.512871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.516809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.516843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.516872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.520803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.520837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.520866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.524706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.524738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.524768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.528611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.528808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.528825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.532721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.532755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.532784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.536664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.536863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.536880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.540861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.540896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.540926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.544779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.544812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.544841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.548707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 
00:21:33.350 [2024-11-19 12:41:38.548741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.548768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.552512] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.552706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.552724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.556617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.556807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.556824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.560677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.560723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.560752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.564529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.564722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.564740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.568567] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.350 [2024-11-19 12:41:38.568742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.350 [2024-11-19 12:41:38.568776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.350 [2024-11-19 12:41:38.572735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.351 [2024-11-19 12:41:38.572769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.351 [2024-11-19 12:41:38.572798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.351 [2024-11-19 12:41:38.576718] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.351 [2024-11-19 12:41:38.576753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.351 [2024-11-19 12:41:38.576781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.351 [2024-11-19 12:41:38.580619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.351 [2024-11-19 12:41:38.580817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.351 [2024-11-19 12:41:38.580833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.351 [2024-11-19 12:41:38.584810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.351 [2024-11-19 12:41:38.584845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.351 [2024-11-19 12:41:38.584858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.351 [2024-11-19 12:41:38.588765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.351 [2024-11-19 12:41:38.588798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.351 [2024-11-19 12:41:38.588827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.351 [2024-11-19 12:41:38.592691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.351 [2024-11-19 12:41:38.592723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.351 [2024-11-19 12:41:38.592752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.351 [2024-11-19 12:41:38.596716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.351 [2024-11-19 12:41:38.596750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.351 [2024-11-19 12:41:38.596778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.351 [2024-11-19 12:41:38.600873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.351 [2024-11-19 12:41:38.600910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.351 [2024-11-19 12:41:38.600940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:21:33.611 [2024-11-19 12:41:38.605382] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.611 [2024-11-19 12:41:38.605418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.611 [2024-11-19 12:41:38.605448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.611 [2024-11-19 12:41:38.609490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.611 [2024-11-19 12:41:38.609524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.611 [2024-11-19 12:41:38.609552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.611 [2024-11-19 12:41:38.613803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.611 [2024-11-19 12:41:38.613836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.611 [2024-11-19 12:41:38.613864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.611 [2024-11-19 12:41:38.617840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.611 [2024-11-19 12:41:38.617874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.611 [2024-11-19 12:41:38.617902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.611 [2024-11-19 12:41:38.621786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.621819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.621847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.625711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.625744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.625773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.629650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.629712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.629725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.633598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.633632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.633660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.637545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.637579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.637608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.641625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.641661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.641702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.645534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.645568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.645597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.649502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.649536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.649565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.653582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.653616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.653644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.657523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.657557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.657585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.661472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.661506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.661535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.665414] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.665448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.665476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.669370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.669404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.669432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.673500] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.673535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.673563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.677461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.677494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.677522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.681342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.681376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.681405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.685302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.685335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:33.612 [2024-11-19 12:41:38.685364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.689350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.689385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.689413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.693323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.693357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.693386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.697377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.697411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.697438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.701387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.701421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.701451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.705456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.705490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.705518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.709431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.709466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.709494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.713468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.713501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.713529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.717404] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.717438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.717482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.721378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.721412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.721440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.612 [2024-11-19 12:41:38.725371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.612 [2024-11-19 12:41:38.725406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.612 [2024-11-19 12:41:38.725434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.729348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.729382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.729410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.733427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.733462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.733490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.737584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.737619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.737647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.741751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.741785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.741813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.745846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.745883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.745912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.749896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.749931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.749959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.753980] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.754015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.754043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.758015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.758048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.758077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.762107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.762141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.762170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.766156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.766190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.766218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.770175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 
00:21:33.613 [2024-11-19 12:41:38.770209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.770237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.774126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.774160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.774188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.778054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.778087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.778115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.781999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.782033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.782061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.785940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.785974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.786002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.789900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.789933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.789962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.793834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.793867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.793895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.797756] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.797789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.797817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.801686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.801736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.801764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.805702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.805734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.805763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.809611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.809645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.809673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.813634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.813691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.813705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.817686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.817748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.817762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.821740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.821773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.821802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.825739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.825771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.825800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.829624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.829658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.829700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.613 [2024-11-19 12:41:38.833592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.613 [2024-11-19 12:41:38.833626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.613 [2024-11-19 12:41:38.833655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.614 [2024-11-19 12:41:38.837464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.614 [2024-11-19 12:41:38.837497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.614 [2024-11-19 12:41:38.837525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.614 [2024-11-19 12:41:38.841443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.614 [2024-11-19 12:41:38.841477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.614 [2024-11-19 12:41:38.841505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.614 [2024-11-19 12:41:38.845405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.614 [2024-11-19 12:41:38.845439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.614 [2024-11-19 12:41:38.845467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.614 [2024-11-19 12:41:38.849463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.614 [2024-11-19 12:41:38.849498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.614 [2024-11-19 12:41:38.849526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.614 [2024-11-19 12:41:38.853372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.614 [2024-11-19 12:41:38.853405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.614 [2024-11-19 12:41:38.853434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.614 [2024-11-19 12:41:38.857337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.614 [2024-11-19 12:41:38.857371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.614 [2024-11-19 12:41:38.857399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.614 [2024-11-19 12:41:38.861393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.614 [2024-11-19 12:41:38.861427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.614 [2024-11-19 12:41:38.861455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.614 [2024-11-19 12:41:38.865787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.614 [2024-11-19 12:41:38.865838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.614 [2024-11-19 12:41:38.865882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.874 [2024-11-19 12:41:38.870199] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.874 [2024-11-19 12:41:38.870233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.874 [2024-11-19 12:41:38.870262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.874 [2024-11-19 12:41:38.874540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.874 [2024-11-19 12:41:38.874575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.874 [2024-11-19 12:41:38.874604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.874 [2024-11-19 12:41:38.878623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.878657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.878699] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.882576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.882611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.882639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.886488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.886522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.886551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.890560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.890595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.890624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.894604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.894639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.894668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.898714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.898746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.898757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.902866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.902901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.902913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.906951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.906987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.906999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.911462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.911506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.911519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.916223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.916428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.916462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.921293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.921327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.921357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.925614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.925648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.925693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.931538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 7611.00 IOPS, 951.38 MiB/s [2024-11-19T12:41:39.135Z] 00:21:33.875 [2024-11-19 12:41:38.931777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.931794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.936216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.936251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.936279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.940525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.940560] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.940589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.944882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.944918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.944930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.949071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.949104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.949133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.953138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.953183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.953213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.957165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.957200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.957228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.961285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.961319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.961348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.965253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.965287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.965316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.969209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 
00:21:33.875 [2024-11-19 12:41:38.969243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.969271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.973167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.973200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.973229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.977145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.977179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.977207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.981140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.981174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.981202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.985106] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.875 [2024-11-19 12:41:38.985139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.875 [2024-11-19 12:41:38.985167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.875 [2024-11-19 12:41:38.989045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:38.989078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:38.989106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:38.992973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:38.993006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:38.993034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:38.996869] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:38.996902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:38.996930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.000761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.000794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.000821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.004764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.004797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.004826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.008739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.008772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.008800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.012625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.012659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.012700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.016503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.016538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.016566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.020494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.020527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.020556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.024431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.024465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.024493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.028476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.028511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.028540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.032451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.032484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.032513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.036401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.036435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.036463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.040358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.040392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.040420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.044316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.044350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.044378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.048355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.048388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.048416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.052347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.052381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.052410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.056458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.056493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.056522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.060492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.060526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.060555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.064516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.064550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.064578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.068467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.068501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.068529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.072456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.072490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.072518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.076409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.076442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.076470] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.080407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.080441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.080469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.084384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.084417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.084446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.088303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.088337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.088365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.092309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.876 [2024-11-19 12:41:39.092342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.876 [2024-11-19 12:41:39.092370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.876 [2024-11-19 12:41:39.096319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.877 [2024-11-19 12:41:39.096352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.877 [2024-11-19 12:41:39.096381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.877 [2024-11-19 12:41:39.100337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.877 [2024-11-19 12:41:39.100371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.877 [2024-11-19 12:41:39.100400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.877 [2024-11-19 12:41:39.104441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.877 [2024-11-19 12:41:39.104476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:33.877 [2024-11-19 12:41:39.104505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.877 [2024-11-19 12:41:39.108507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.877 [2024-11-19 12:41:39.108540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.877 [2024-11-19 12:41:39.108568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.877 [2024-11-19 12:41:39.112542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.877 [2024-11-19 12:41:39.112576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.877 [2024-11-19 12:41:39.112605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.877 [2024-11-19 12:41:39.116477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.877 [2024-11-19 12:41:39.116510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.877 [2024-11-19 12:41:39.116539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.877 [2024-11-19 12:41:39.120593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.877 [2024-11-19 12:41:39.120626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.877 [2024-11-19 12:41:39.120654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.877 [2024-11-19 12:41:39.124497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.877 [2024-11-19 12:41:39.124529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.877 [2024-11-19 12:41:39.124557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.877 [2024-11-19 12:41:39.128915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:33.877 [2024-11-19 12:41:39.128949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.877 [2024-11-19 12:41:39.128978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.148 [2024-11-19 12:41:39.133236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.148 [2024-11-19 12:41:39.133268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.148 [2024-11-19 12:41:39.133297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.148 [2024-11-19 12:41:39.137536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.148 [2024-11-19 12:41:39.137571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.148 [2024-11-19 12:41:39.137599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.148 [2024-11-19 12:41:39.141838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.148 [2024-11-19 12:41:39.141872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.148 [2024-11-19 12:41:39.141901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.148 [2024-11-19 12:41:39.145770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.148 [2024-11-19 12:41:39.145804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.148 [2024-11-19 12:41:39.145832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.148 [2024-11-19 12:41:39.149719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.148 [2024-11-19 12:41:39.149752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.148 [2024-11-19 12:41:39.149781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.148 [2024-11-19 12:41:39.153700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.148 [2024-11-19 12:41:39.153742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.149 [2024-11-19 12:41:39.153771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.149 [2024-11-19 12:41:39.157564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.149 [2024-11-19 12:41:39.157597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.149 [2024-11-19 12:41:39.157626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.149 [2024-11-19 12:41:39.161642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.149 [2024-11-19 12:41:39.161720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.149 [2024-11-19 12:41:39.161734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.149 [2024-11-19 12:41:39.165574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.149 [2024-11-19 12:41:39.165609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.149 [2024-11-19 12:41:39.165636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.149 [2024-11-19 12:41:39.169461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.149 [2024-11-19 12:41:39.169495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.149 [2024-11-19 12:41:39.169523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.149 [2024-11-19 12:41:39.173347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.149 [2024-11-19 12:41:39.173380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.149 [2024-11-19 12:41:39.173409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.150 [2024-11-19 12:41:39.177322] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.150 [2024-11-19 12:41:39.177356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.150 [2024-11-19 12:41:39.177384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.150 [2024-11-19 12:41:39.181227] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.150 [2024-11-19 12:41:39.181260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.150 [2024-11-19 12:41:39.181289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.150 [2024-11-19 12:41:39.185137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.150 [2024-11-19 12:41:39.185171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.150 [2024-11-19 12:41:39.185200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.150 [2024-11-19 12:41:39.189069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 
00:21:34.150 [2024-11-19 12:41:39.189102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.150 [2024-11-19 12:41:39.189130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.150 [2024-11-19 12:41:39.193074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.150 [2024-11-19 12:41:39.193107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.150 [2024-11-19 12:41:39.193136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.151 [2024-11-19 12:41:39.197059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.151 [2024-11-19 12:41:39.197092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.151 [2024-11-19 12:41:39.197120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.151 [2024-11-19 12:41:39.200943] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.151 [2024-11-19 12:41:39.200976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.151 [2024-11-19 12:41:39.201004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.151 [2024-11-19 12:41:39.204891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.151 [2024-11-19 12:41:39.204923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.151 [2024-11-19 12:41:39.204951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.151 [2024-11-19 12:41:39.208882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.151 [2024-11-19 12:41:39.208914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.151 [2024-11-19 12:41:39.208942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.151 [2024-11-19 12:41:39.212804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.151 [2024-11-19 12:41:39.212837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.151 [2024-11-19 12:41:39.212865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.216679] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.216711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.216739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.220662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.220705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.220733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.224505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.224539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.224567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.228433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.228467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.228495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.232401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.232434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.232463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.236374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.236407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.236438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.240352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.240386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.240414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.244330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.244365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.244393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.248354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.248387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.248416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.252324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.252357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.252386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.256273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.256307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.256335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.260309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.260343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.260371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.264317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.264352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.264380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.268281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.268316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.268344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.272234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.272269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.272298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.276206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.276239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.276268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.280132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.280166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.280195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.284048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.284080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.284108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.288178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.288212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.288224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.292466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.292502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.292531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.296843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.152 [2024-11-19 12:41:39.296879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.152 [2024-11-19 12:41:39.296891] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.152 [2024-11-19 12:41:39.300971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.301002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.301015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.305650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.305729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.305745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.310228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.310262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.310275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.314644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.314738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.314753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.319176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.319211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.319224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.323665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.323775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.323790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.328151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.328186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.328198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.332304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.332339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.332351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.336551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.336591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.336605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.340803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.340837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.340850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.344825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.344859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.344871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.348860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.348894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.348906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.352902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.352936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.352949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.357146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.357181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.357193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.361213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.361248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.361260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.365215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.365250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.365262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.369239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.369274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.369286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.373337] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.373371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.373384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.377515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.377552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.377579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.381629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.381677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.381707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.385716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.385750] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.385762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.389655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.389716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.389729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.393944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.393978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.393990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.398499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.398535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.398547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.153 [2024-11-19 12:41:39.402592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.153 [2024-11-19 12:41:39.402629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.153 [2024-11-19 12:41:39.402658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.407027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.407060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.407072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.411068] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.411102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.411115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.415501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.415539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.415553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.419880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.419914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.419926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.423924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.423958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.423971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.427937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.427971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.427983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.431974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.432007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.432019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.435937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.435970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.435982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.440237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.440272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.440285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.444237] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.444272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.444284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.448247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.448282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.448294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.452314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.452348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.452360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.456358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.456393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.456405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.460590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.460625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.460637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.464708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.464742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.464754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.468683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.468716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.468728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.472619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.472654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.472696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.476629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.476675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.476705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.480840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.480874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.480886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.484839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.484873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.484885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.488801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.488835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.488848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.492786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.492820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.492832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.496828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.496862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.496874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.501130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.501165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.501177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.505176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.505211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.414 [2024-11-19 12:41:39.505223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.414 [2024-11-19 12:41:39.509183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.414 [2024-11-19 12:41:39.509218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.509231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.513238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.513273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.513285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.517442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.517495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.517507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.521550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.521585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.521615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.525652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.525731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.525745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.529739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.529775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.529806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.533866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.533901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.533931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.537903] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.537937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.537968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.541945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.541980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.542010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.545840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.545874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.545904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.549728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.549761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.549791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.553703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.553737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:34.415 [2024-11-19 12:41:39.553767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.557545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.557578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.557608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.561592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.561627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.561656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.565560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.565594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.565623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.569526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.569560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.569589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.573496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.573529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.573558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.577505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.577541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.577570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.581455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.581489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.581519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.585472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.585505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.585534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.589411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.589445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.589474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.593481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.593515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.593544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.597502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.597536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.597565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.601437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.601470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.601498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.605429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.605463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.605491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.609340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.609373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.609401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.613359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.415 [2024-11-19 12:41:39.613393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.415 [2024-11-19 12:41:39.613421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.415 [2024-11-19 12:41:39.617399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.416 [2024-11-19 12:41:39.617433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.416 [2024-11-19 12:41:39.617461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.416 [2024-11-19 12:41:39.621440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.416 [2024-11-19 12:41:39.621474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.416 [2024-11-19 12:41:39.621502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.416 [2024-11-19 12:41:39.625357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.416 [2024-11-19 12:41:39.625391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.416 [2024-11-19 12:41:39.625419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.416 [2024-11-19 12:41:39.629341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.416 [2024-11-19 12:41:39.629374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.416 [2024-11-19 12:41:39.629401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.416 [2024-11-19 12:41:39.633282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.416 [2024-11-19 12:41:39.633315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.416 [2024-11-19 12:41:39.633344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.416 [2024-11-19 12:41:39.637307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 
00:21:34.416 [2024-11-19 12:41:39.637340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.416 [2024-11-19 12:41:39.637370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.416 [2024-11-19 12:41:39.641325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.416 [2024-11-19 12:41:39.641359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.416 [2024-11-19 12:41:39.641388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.416 [2024-11-19 12:41:39.645263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.416 [2024-11-19 12:41:39.645297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.416 [2024-11-19 12:41:39.645325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.416 [2024-11-19 12:41:39.649345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.416 [2024-11-19 12:41:39.649379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.416 [2024-11-19 12:41:39.649407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.416 [2024-11-19 12:41:39.653380] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.416 [2024-11-19 12:41:39.653414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.416 [2024-11-19 12:41:39.653442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.416 [2024-11-19 12:41:39.657360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.416 [2024-11-19 12:41:39.657393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.416 [2024-11-19 12:41:39.657421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.416 [2024-11-19 12:41:39.661338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.416 [2024-11-19 12:41:39.661371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.416 [2024-11-19 12:41:39.661399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.416 [2024-11-19 12:41:39.665374] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.416 [2024-11-19 12:41:39.665408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.416 [2024-11-19 12:41:39.665436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.677 [2024-11-19 12:41:39.669895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.677 [2024-11-19 12:41:39.669931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.677 [2024-11-19 12:41:39.669960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.677 [2024-11-19 12:41:39.674122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.677 [2024-11-19 12:41:39.674156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.677 [2024-11-19 12:41:39.674185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.677 [2024-11-19 12:41:39.678451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.677 [2024-11-19 12:41:39.678485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.677 [2024-11-19 12:41:39.678513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.677 [2024-11-19 12:41:39.682562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.677 [2024-11-19 12:41:39.682598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.677 [2024-11-19 12:41:39.682627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.677 [2024-11-19 12:41:39.686519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.677 [2024-11-19 12:41:39.686552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.677 [2024-11-19 12:41:39.686581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.677 [2024-11-19 12:41:39.690464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.677 [2024-11-19 12:41:39.690498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.677 [2024-11-19 12:41:39.690526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:21:34.677 [2024-11-19 12:41:39.694577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.677 [2024-11-19 12:41:39.694612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.677 [2024-11-19 12:41:39.694641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.677 [2024-11-19 12:41:39.698544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.677 [2024-11-19 12:41:39.698577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.677 [2024-11-19 12:41:39.698605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.677 [2024-11-19 12:41:39.702498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.677 [2024-11-19 12:41:39.702532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.677 [2024-11-19 12:41:39.702561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.677 [2024-11-19 12:41:39.706637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.677 [2024-11-19 12:41:39.706700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.677 [2024-11-19 12:41:39.706714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.677 [2024-11-19 12:41:39.710563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.677 [2024-11-19 12:41:39.710597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.677 [2024-11-19 12:41:39.710625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.677 [2024-11-19 12:41:39.714551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.677 [2024-11-19 12:41:39.714585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.677 [2024-11-19 12:41:39.714614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.677 [2024-11-19 12:41:39.718674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.677 [2024-11-19 12:41:39.718741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.677 [2024-11-19 12:41:39.718754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.677 [2024-11-19 12:41:39.722624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.677 [2024-11-19 12:41:39.722658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.677 [2024-11-19 12:41:39.722698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.677 [2024-11-19 12:41:39.726674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.677 [2024-11-19 12:41:39.726719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.677 [2024-11-19 12:41:39.726748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.677 [2024-11-19 12:41:39.730635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.730699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.730713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.734783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.734818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.734847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.738908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.738943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.738972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.742979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.743013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.743041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.747010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.747044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.747072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.751145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.751180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.751209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.755334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.755373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.755387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.759340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.759379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.759408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.763370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.763408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.763438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.767360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.767398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.767426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.771367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.771406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.771420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.775278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.775339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:34.678 [2024-11-19 12:41:39.775353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.779289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.779350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.779380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.783235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.783268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.783321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.787436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.787476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.787506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.791410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.791448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.791462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.795381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.795418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.795448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.799324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.799361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.799391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.803375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.803413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.803443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.807349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.807385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.807415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.811277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.811337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.811366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.815268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.815326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.815355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.819225] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.819258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.819287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.823238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.823272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.823325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.827183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.827217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.827245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.831272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.831348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.831362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.678 [2024-11-19 12:41:39.835248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.678 [2024-11-19 12:41:39.835281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.678 [2024-11-19 12:41:39.835350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.839204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.839237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.839265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.843253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.843288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.843356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.847236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.847270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.847322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.851198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.851231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.851259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.855229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.855264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.855301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.859185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 
00:21:34.679 [2024-11-19 12:41:39.859219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.859247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.863205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.863239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.863268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.867235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.867270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.867323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.871328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.871365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.871379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.875202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.875236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.875264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.879165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.879199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.879228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.883162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.883196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.883225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.887196] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.887231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.887260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.891266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.891324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.891353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.895196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.895230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.895257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.899246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.899281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.899350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.903209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.903244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.903273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.907148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.907181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.907209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.911188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.911222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.911251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.915200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.915234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.915262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.919201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.919235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.919263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.923525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.923563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.923577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.679 [2024-11-19 12:41:39.927877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b71220) 00:21:34.679 [2024-11-19 12:41:39.927913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.679 [2024-11-19 12:41:39.927925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.939 7626.00 IOPS, 953.25 MiB/s 00:21:34.940 Latency(us) 00:21:34.940 [2024-11-19T12:41:40.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.940 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:34.940 nvme0n1 : 2.00 7624.79 953.10 0.00 0.00 2095.60 1697.98 6464.23 00:21:34.940 [2024-11-19T12:41:40.200Z] =================================================================================================================== 00:21:34.940 [2024-11-19T12:41:40.200Z] Total : 7624.79 953.10 0.00 0.00 2095.60 1697.98 6464.23 00:21:34.940 { 00:21:34.940 "results": [ 00:21:34.940 { 00:21:34.940 "job": "nvme0n1", 00:21:34.940 "core_mask": "0x2", 00:21:34.940 "workload": "randread", 00:21:34.940 "status": "finished", 00:21:34.940 "queue_depth": 16, 00:21:34.940 "io_size": 131072, 00:21:34.940 "runtime": 2.002417, 00:21:34.940 "iops": 7624.785446787557, 00:21:34.940 "mibps": 953.0981808484446, 00:21:34.940 "io_failed": 0, 00:21:34.940 "io_timeout": 0, 00:21:34.940 "avg_latency_us": 2095.5978183723537, 00:21:34.940 "min_latency_us": 1697.9781818181818, 00:21:34.940 "max_latency_us": 6464.232727272727 00:21:34.940 } 00:21:34.940 ], 00:21:34.940 "core_count": 1 00:21:34.940 } 00:21:34.940 12:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:34.940 12:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # 
bperf_rpc bdev_get_iostat -b nvme0n1 00:21:34.940 12:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:34.940 | .driver_specific 00:21:34.940 | .nvme_error 00:21:34.940 | .status_code 00:21:34.940 | .command_transient_transport_error' 00:21:34.940 12:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:34.940 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 492 > 0 )) 00:21:34.940 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95650 00:21:34.940 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 95650 ']' 00:21:34.940 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 95650 00:21:34.940 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:34.940 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:34.940 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95650 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:35.200 killing process with pid 95650 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95650' 00:21:35.200 Received shutdown signal, test time was about 2.000000 seconds 00:21:35.200 00:21:35.200 Latency(us) 00:21:35.200 [2024-11-19T12:41:40.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.200 [2024-11-19T12:41:40.460Z] =================================================================================================================== 00:21:35.200 [2024-11-19T12:41:40.460Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 95650 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 95650 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95705 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95705 /var/tmp/bperf.sock 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:21:35.200 12:41:40 
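(For reference, the get_transient_errcount step traced earlier in this block reduces to one RPC plus a jq filter over its JSON output. A minimal sketch, assuming the same bperf RPC socket, bdev name, and jq path that appear in the trace above; it is illustrative only, not an extra step of the test:)

  # Read back the per-status NVMe error counters kept because of --nvme-error-stat,
  # and pull out the "command transient transport error" count for nvme0n1.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # the randread pass above recorded 492 such completions
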
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 95705 ']' 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:35.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:35.200 12:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:35.200 [2024-11-19 12:41:40.405440] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:35.200 [2024-11-19 12:41:40.405544] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95705 ] 00:21:35.460 [2024-11-19 12:41:40.538870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.460 [2024-11-19 12:41:40.574380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.460 [2024-11-19 12:41:40.603397] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:36.397 12:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:36.397 12:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:36.397 12:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:36.397 12:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:36.397 12:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:36.397 12:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.397 12:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:36.397 12:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.397 12:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:36.397 12:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:36.656 nvme0n1 00:21:36.656 12:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:36.656 12:41:41 
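(Condensed for readability, the randwrite error-injection case set up just above boils down to the RPC sequence below. This is a sketch assembled from the traced commands, not part of the test itself; the bperf_rpc calls visibly target bdevperf's /var/tmp/bperf.sock, while rpc_cmd is assumed here to use the default RPC socket since no -s appears in its trace:)

  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"                        # bdevperf's RPC socket
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1       # keep per-status error counters, retry indefinitely
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable         # injection off while the controller attaches
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                         # TCP data digest enabled on the connection
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256  # periodically corrupt crc32c results (interval 256)
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
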
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.656 12:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:36.656 12:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.656 12:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:36.656 12:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:36.916 Running I/O for 2 seconds... 00:21:36.916 [2024-11-19 12:41:42.024881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fef90 00:21:36.916 [2024-11-19 12:41:42.027243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.916 [2024-11-19 12:41:42.027321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.916 [2024-11-19 12:41:42.039710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198feb58 00:21:36.916 [2024-11-19 12:41:42.042048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.916 [2024-11-19 12:41:42.042096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:36.916 [2024-11-19 12:41:42.053740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fe2e8 00:21:36.916 [2024-11-19 12:41:42.056136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.916 [2024-11-19 12:41:42.056183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:36.916 [2024-11-19 12:41:42.067274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fda78 00:21:36.916 [2024-11-19 12:41:42.069594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.916 [2024-11-19 12:41:42.069641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:36.916 [2024-11-19 12:41:42.081041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fd208 00:21:36.916 [2024-11-19 12:41:42.083194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.916 [2024-11-19 12:41:42.083243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:36.916 [2024-11-19 12:41:42.094502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fc998 00:21:36.916 [2024-11-19 12:41:42.096912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:991 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:36.917 [2024-11-19 12:41:42.096943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:36.917 [2024-11-19 12:41:42.108254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fc128 00:21:36.917 [2024-11-19 12:41:42.110355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.917 [2024-11-19 12:41:42.110403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:36.917 [2024-11-19 12:41:42.121894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fb8b8 00:21:36.917 [2024-11-19 12:41:42.124112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.917 [2024-11-19 12:41:42.124159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:36.917 [2024-11-19 12:41:42.135283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fb048 00:21:36.917 [2024-11-19 12:41:42.137487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.917 [2024-11-19 12:41:42.137533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:36.917 [2024-11-19 12:41:42.148988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fa7d8 00:21:36.917 [2024-11-19 12:41:42.151079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.917 [2024-11-19 12:41:42.151126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:36.917 [2024-11-19 12:41:42.162367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f9f68 00:21:36.917 [2024-11-19 12:41:42.164534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.917 [2024-11-19 12:41:42.164580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:37.177 [2024-11-19 12:41:42.177358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f96f8 00:21:37.177 [2024-11-19 12:41:42.179611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.177 [2024-11-19 12:41:42.179700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:37.177 [2024-11-19 12:41:42.190916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f8e88 00:21:37.177 [2024-11-19 12:41:42.193146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14327 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:37.177 [2024-11-19 12:41:42.193192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:37.177 [2024-11-19 12:41:42.204642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f8618 00:21:37.177 [2024-11-19 12:41:42.206709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.177 [2024-11-19 12:41:42.206757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:37.177 [2024-11-19 12:41:42.218117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f7da8 00:21:37.177 [2024-11-19 12:41:42.220192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.177 [2024-11-19 12:41:42.220238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:37.177 [2024-11-19 12:41:42.231496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f7538 00:21:37.177 [2024-11-19 12:41:42.233600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.177 [2024-11-19 12:41:42.233646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:37.177 [2024-11-19 12:41:42.245023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f6cc8 00:21:37.177 [2024-11-19 12:41:42.246961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.177 [2024-11-19 12:41:42.247007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.177 [2024-11-19 12:41:42.258336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f6458 00:21:37.177 [2024-11-19 12:41:42.260375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.177 [2024-11-19 12:41:42.260421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:37.177 [2024-11-19 12:41:42.271879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f5be8 00:21:37.177 [2024-11-19 12:41:42.273932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.177 [2024-11-19 12:41:42.273979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:37.177 [2024-11-19 12:41:42.285583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f5378 00:21:37.177 [2024-11-19 12:41:42.287554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19570 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.177 [2024-11-19 12:41:42.287587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:37.177 [2024-11-19 12:41:42.299092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f4b08 00:21:37.177 [2024-11-19 12:41:42.301115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.177 [2024-11-19 12:41:42.301163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:37.177 [2024-11-19 12:41:42.312593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f4298 00:21:37.177 [2024-11-19 12:41:42.314483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.178 [2024-11-19 12:41:42.314529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:37.178 [2024-11-19 12:41:42.326113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f3a28 00:21:37.178 [2024-11-19 12:41:42.328101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.178 [2024-11-19 12:41:42.328147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:37.178 [2024-11-19 12:41:42.339453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f31b8 00:21:37.178 [2024-11-19 12:41:42.341465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.178 [2024-11-19 12:41:42.341510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:37.178 [2024-11-19 12:41:42.353112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f2948 00:21:37.178 [2024-11-19 12:41:42.354949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.178 [2024-11-19 12:41:42.354995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:37.178 [2024-11-19 12:41:42.366469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f20d8 00:21:37.178 [2024-11-19 12:41:42.368462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.178 [2024-11-19 12:41:42.368509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:37.178 [2024-11-19 12:41:42.380109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f1868 00:21:37.178 [2024-11-19 12:41:42.381920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 
nsid:1 lba:11131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.178 [2024-11-19 12:41:42.381966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:37.178 [2024-11-19 12:41:42.393562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f0ff8 00:21:37.178 [2024-11-19 12:41:42.395457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.178 [2024-11-19 12:41:42.395491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:37.178 [2024-11-19 12:41:42.407112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f0788 00:21:37.178 [2024-11-19 12:41:42.409031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.178 [2024-11-19 12:41:42.409063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:37.178 [2024-11-19 12:41:42.420579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198eff18 00:21:37.178 [2024-11-19 12:41:42.422388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.178 [2024-11-19 12:41:42.422434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.434927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198ef6a8 00:21:37.438 [2024-11-19 12:41:42.436860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.436892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.448918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198eee38 00:21:37.438 [2024-11-19 12:41:42.450627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.450695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.462382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198ee5c8 00:21:37.438 [2024-11-19 12:41:42.464261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.464305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.475991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198edd58 00:21:37.438 [2024-11-19 12:41:42.477729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:67 nsid:1 lba:25265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.477775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.489574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198ed4e8 00:21:37.438 [2024-11-19 12:41:42.491261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.491330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.503129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198ecc78 00:21:37.438 [2024-11-19 12:41:42.504892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.504927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.516713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198ec408 00:21:37.438 [2024-11-19 12:41:42.518338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.518385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.530165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198ebb98 00:21:37.438 [2024-11-19 12:41:42.531917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.531963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.543738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198eb328 00:21:37.438 [2024-11-19 12:41:42.545401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.545448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.557337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198eaab8 00:21:37.438 [2024-11-19 12:41:42.558947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.558993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.570711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198ea248 00:21:37.438 [2024-11-19 12:41:42.572352] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.572398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.584212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e99d8 00:21:37.438 [2024-11-19 12:41:42.585784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.585830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.597733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e9168 00:21:37.438 [2024-11-19 12:41:42.599364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.599397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.611220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e88f8 00:21:37.438 [2024-11-19 12:41:42.612822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.612854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.624820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e8088 00:21:37.438 [2024-11-19 12:41:42.626314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.626360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.638259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e7818 00:21:37.438 [2024-11-19 12:41:42.639862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.639909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.651745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e6fa8 00:21:37.438 [2024-11-19 12:41:42.653272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.653319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.665244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e6738 00:21:37.438 [2024-11-19 12:41:42.666714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.666767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.678573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e5ec8 00:21:37.438 [2024-11-19 12:41:42.680142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.438 [2024-11-19 12:41:42.680188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:37.438 [2024-11-19 12:41:42.692617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e5658 00:21:37.439 [2024-11-19 12:41:42.694278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.439 [2024-11-19 12:41:42.694325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:37.698 [2024-11-19 12:41:42.707151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e4de8 00:21:37.698 [2024-11-19 12:41:42.708669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.698 [2024-11-19 12:41:42.708727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:37.698 [2024-11-19 12:41:42.720788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e4578 00:21:37.698 [2024-11-19 12:41:42.722208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.698 [2024-11-19 12:41:42.722256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:37.698 [2024-11-19 12:41:42.734293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e3d08 00:21:37.698 [2024-11-19 12:41:42.735789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.698 [2024-11-19 12:41:42.735835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:37.698 [2024-11-19 12:41:42.747874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e3498 00:21:37.698 [2024-11-19 12:41:42.749297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.698 [2024-11-19 12:41:42.749344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:37.698 [2024-11-19 12:41:42.761660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e2c28 00:21:37.698 [2024-11-19 
12:41:42.763031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.698 [2024-11-19 12:41:42.763077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:37.698 [2024-11-19 12:41:42.775362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e23b8 00:21:37.698 [2024-11-19 12:41:42.776766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.698 [2024-11-19 12:41:42.776798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:37.698 [2024-11-19 12:41:42.788895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e1b48 00:21:37.698 [2024-11-19 12:41:42.790206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.698 [2024-11-19 12:41:42.790253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:37.699 [2024-11-19 12:41:42.802425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e12d8 00:21:37.699 [2024-11-19 12:41:42.803891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.699 [2024-11-19 12:41:42.803938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:37.699 [2024-11-19 12:41:42.816196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e0a68 00:21:37.699 [2024-11-19 12:41:42.817545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.699 [2024-11-19 12:41:42.817592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:37.699 [2024-11-19 12:41:42.829873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e01f8 00:21:37.699 [2024-11-19 12:41:42.831157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.699 [2024-11-19 12:41:42.831204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:37.699 [2024-11-19 12:41:42.843273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198df988 00:21:37.699 [2024-11-19 12:41:42.844587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.699 [2024-11-19 12:41:42.844633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:37.699 [2024-11-19 12:41:42.856728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with 
pdu=0x2000198df118 00:21:37.699 [2024-11-19 12:41:42.857972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.699 [2024-11-19 12:41:42.858019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:37.699 [2024-11-19 12:41:42.870120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198de8a8 00:21:37.699 [2024-11-19 12:41:42.871521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.699 [2024-11-19 12:41:42.871557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:37.699 [2024-11-19 12:41:42.884208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198de038 00:21:37.699 [2024-11-19 12:41:42.885438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.699 [2024-11-19 12:41:42.885484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:37.699 [2024-11-19 12:41:42.903156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198de038 00:21:37.699 [2024-11-19 12:41:42.905486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.699 [2024-11-19 12:41:42.905533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.699 [2024-11-19 12:41:42.916808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198de8a8 00:21:37.699 [2024-11-19 12:41:42.918961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.699 [2024-11-19 12:41:42.919008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:37.699 [2024-11-19 12:41:42.931349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198df118 00:21:37.699 [2024-11-19 12:41:42.933702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.699 [2024-11-19 12:41:42.933733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:37.699 [2024-11-19 12:41:42.946755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198df988 00:21:37.699 [2024-11-19 12:41:42.949194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.699 [2024-11-19 12:41:42.949241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:37.959 [2024-11-19 12:41:42.963094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcae430) with pdu=0x2000198e01f8 00:21:37.959 [2024-11-19 12:41:42.965517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:42.965565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:37.959 [2024-11-19 12:41:42.978210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e0a68 00:21:37.959 [2024-11-19 12:41:42.980543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:42.980589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:37.959 [2024-11-19 12:41:42.992912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e12d8 00:21:37.959 [2024-11-19 12:41:42.995277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:42.995349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:37.959 18345.00 IOPS, 71.66 MiB/s [2024-11-19T12:41:43.219Z] [2024-11-19 12:41:43.011061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e1b48 00:21:37.959 [2024-11-19 12:41:43.013597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:43.013646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:37.959 [2024-11-19 12:41:43.027709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e23b8 00:21:37.959 [2024-11-19 12:41:43.030181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:43.030230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:37.959 [2024-11-19 12:41:43.043360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e2c28 00:21:37.959 [2024-11-19 12:41:43.045596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:43.045643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:37.959 [2024-11-19 12:41:43.057925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e3498 00:21:37.959 [2024-11-19 12:41:43.060159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:43.060207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:37.959 [2024-11-19 
12:41:43.072491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e3d08 00:21:37.959 [2024-11-19 12:41:43.074710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:43.074765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:37.959 [2024-11-19 12:41:43.087250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e4578 00:21:37.959 [2024-11-19 12:41:43.089402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:43.089450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:37.959 [2024-11-19 12:41:43.101955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e4de8 00:21:37.959 [2024-11-19 12:41:43.104163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:43.104210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:37.959 [2024-11-19 12:41:43.117053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e5658 00:21:37.959 [2024-11-19 12:41:43.119191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:43.119238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:37.959 [2024-11-19 12:41:43.131150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e5ec8 00:21:37.959 [2024-11-19 12:41:43.133233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:43.133280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:37.959 [2024-11-19 12:41:43.144856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e6738 00:21:37.959 [2024-11-19 12:41:43.146850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:43.146896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:37.959 [2024-11-19 12:41:43.158517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e6fa8 00:21:37.959 [2024-11-19 12:41:43.160638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:43.160706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:37.959 
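(As a quick consistency check on the interim bandwidth figure reported above for this randwrite pass, 18345.00 IOPS at the 4096-byte I/O size configured earlier gives exactly the printed MiB/s value; the line below is illustrative arithmetic only, not part of the test:)

  # 18345 I/Os per second x 4096 bytes per I/O, converted to MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 18345.00 * 4096 / (1024 * 1024) }'   # prints 71.66 MiB/s
  # The same relation held for the earlier 128 KiB randread pass: 7624.79 * 131072 / 2^20 = 953.10 MiB/s.
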
[2024-11-19 12:41:43.172207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e7818 00:21:37.959 [2024-11-19 12:41:43.174175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:43.174221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:37.959 [2024-11-19 12:41:43.185692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e8088 00:21:37.959 [2024-11-19 12:41:43.187672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:43.187740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:37.959 [2024-11-19 12:41:43.199195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e88f8 00:21:37.959 [2024-11-19 12:41:43.201278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:43.201339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:37.959 [2024-11-19 12:41:43.213104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e9168 00:21:37.959 [2024-11-19 12:41:43.215253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.959 [2024-11-19 12:41:43.215320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:38.219 [2024-11-19 12:41:43.227817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198e99d8 00:21:38.219 [2024-11-19 12:41:43.229692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.219 [2024-11-19 12:41:43.229724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:38.219 [2024-11-19 12:41:43.241281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198ea248 00:21:38.219 [2024-11-19 12:41:43.243123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.219 [2024-11-19 12:41:43.243169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:38.219 [2024-11-19 12:41:43.254696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198eaab8 00:21:38.219 [2024-11-19 12:41:43.256618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.219 [2024-11-19 12:41:43.256688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:21:38.219 [2024-11-19 12:41:43.268259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198eb328 00:21:38.219 [2024-11-19 12:41:43.270119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.219 [2024-11-19 12:41:43.270165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:38.219 [2024-11-19 12:41:43.281797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198ebb98 00:21:38.219 [2024-11-19 12:41:43.283716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.219 [2024-11-19 12:41:43.283770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:38.219 [2024-11-19 12:41:43.295216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198ec408 00:21:38.219 [2024-11-19 12:41:43.297090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.219 [2024-11-19 12:41:43.297136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:38.219 [2024-11-19 12:41:43.308702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198ecc78 00:21:38.219 [2024-11-19 12:41:43.310509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.219 [2024-11-19 12:41:43.310555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:38.219 [2024-11-19 12:41:43.322372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198ed4e8 00:21:38.219 [2024-11-19 12:41:43.324397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.220 [2024-11-19 12:41:43.324445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:38.220 [2024-11-19 12:41:43.336063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198edd58 00:21:38.220 [2024-11-19 12:41:43.337793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.220 [2024-11-19 12:41:43.337839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:38.220 [2024-11-19 12:41:43.349522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198ee5c8 00:21:38.220 [2024-11-19 12:41:43.351386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.220 [2024-11-19 12:41:43.351416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:21:38.220 [2024-11-19 12:41:43.362926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198eee38 00:21:38.220 [2024-11-19 12:41:43.364656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.220 [2024-11-19 12:41:43.364710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:38.220 [2024-11-19 12:41:43.376297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198ef6a8 00:21:38.220 [2024-11-19 12:41:43.377986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.220 [2024-11-19 12:41:43.378032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:38.220 [2024-11-19 12:41:43.389719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198eff18 00:21:38.220 [2024-11-19 12:41:43.391486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.220 [2024-11-19 12:41:43.391519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:38.220 [2024-11-19 12:41:43.403155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f0788 00:21:38.220 [2024-11-19 12:41:43.404847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.220 [2024-11-19 12:41:43.404893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:38.220 [2024-11-19 12:41:43.416608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f0ff8 00:21:38.220 [2024-11-19 12:41:43.418251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.220 [2024-11-19 12:41:43.418296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:38.220 [2024-11-19 12:41:43.430212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f1868 00:21:38.220 [2024-11-19 12:41:43.431958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.220 [2024-11-19 12:41:43.431991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:38.220 [2024-11-19 12:41:43.443545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f20d8 00:21:38.220 [2024-11-19 12:41:43.445208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.220 [2024-11-19 12:41:43.445254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 
sqhd:0034 p:0 m:0 dnr:0 00:21:38.220 [2024-11-19 12:41:43.456989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f2948 00:21:38.220 [2024-11-19 12:41:43.458575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.220 [2024-11-19 12:41:43.458622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:38.220 [2024-11-19 12:41:43.470347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f31b8 00:21:38.220 [2024-11-19 12:41:43.472172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.220 [2024-11-19 12:41:43.472220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.485133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f3a28 00:21:38.480 [2024-11-19 12:41:43.486695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.486749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.498565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f4298 00:21:38.480 [2024-11-19 12:41:43.500321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.500366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.512219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f4b08 00:21:38.480 [2024-11-19 12:41:43.513757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.513802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.525702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f5378 00:21:38.480 [2024-11-19 12:41:43.527214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.527261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.539063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f5be8 00:21:38.480 [2024-11-19 12:41:43.540668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.540725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.552598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f6458 00:21:38.480 [2024-11-19 12:41:43.554093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.554139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.565989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f6cc8 00:21:38.480 [2024-11-19 12:41:43.567508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.567540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.579288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f7538 00:21:38.480 [2024-11-19 12:41:43.580779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.580825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.592650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f7da8 00:21:38.480 [2024-11-19 12:41:43.594098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.594144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.606156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f8618 00:21:38.480 [2024-11-19 12:41:43.607702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.607756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.619777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f8e88 00:21:38.480 [2024-11-19 12:41:43.621241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.621303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.633351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f96f8 00:21:38.480 [2024-11-19 12:41:43.634772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.634818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.646838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f9f68 00:21:38.480 [2024-11-19 12:41:43.648312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.648357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.660423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fa7d8 00:21:38.480 [2024-11-19 12:41:43.661838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.661868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.673891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fb048 00:21:38.480 [2024-11-19 12:41:43.675226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.675272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.687270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fb8b8 00:21:38.480 [2024-11-19 12:41:43.688646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.688701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.700848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fc128 00:21:38.480 [2024-11-19 12:41:43.702187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.702233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.714405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fc998 00:21:38.480 [2024-11-19 12:41:43.715865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.715913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:38.480 [2024-11-19 12:41:43.728130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fd208 00:21:38.480 [2024-11-19 12:41:43.729411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.480 [2024-11-19 12:41:43.729457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.743061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fda78 00:21:38.741 [2024-11-19 12:41:43.744399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.744446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.756913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fe2e8 00:21:38.741 [2024-11-19 12:41:43.758217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.758264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.770520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198feb58 00:21:38.741 [2024-11-19 12:41:43.771945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.771991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.789621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fef90 00:21:38.741 [2024-11-19 12:41:43.791921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.791967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.803014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198feb58 00:21:38.741 [2024-11-19 12:41:43.805263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.805309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.816423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fe2e8 00:21:38.741 [2024-11-19 12:41:43.818682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.818736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.829873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fda78 00:21:38.741 [2024-11-19 12:41:43.832133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.832179] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.843382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fd208 00:21:38.741 [2024-11-19 12:41:43.845520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.845567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.856839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fc998 00:21:38.741 [2024-11-19 12:41:43.859012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.859059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.870240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fc128 00:21:38.741 [2024-11-19 12:41:43.872610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.872656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.883856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fb8b8 00:21:38.741 [2024-11-19 12:41:43.885975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.886006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.897180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fb048 00:21:38.741 [2024-11-19 12:41:43.899260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.899327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.910556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198fa7d8 00:21:38.741 [2024-11-19 12:41:43.912762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.912807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.924156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f9f68 00:21:38.741 [2024-11-19 12:41:43.926253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.926298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.937548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f96f8 00:21:38.741 [2024-11-19 12:41:43.939735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.939779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.951173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f8e88 00:21:38.741 [2024-11-19 12:41:43.953283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.953329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.964563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f8618 00:21:38.741 [2024-11-19 12:41:43.966655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.966707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.978016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f7da8 00:21:38.741 [2024-11-19 12:41:43.980097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.980142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:38.741 [2024-11-19 12:41:43.991426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f7538 00:21:38.741 [2024-11-19 12:41:43.993581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.741 [2024-11-19 12:41:43.993661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:39.001 [2024-11-19 12:41:44.006321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae430) with pdu=0x2000198f6cc8 00:21:39.001 18344.00 IOPS, 71.66 MiB/s [2024-11-19T12:41:44.261Z] [2024-11-19 12:41:44.008556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.001 [2024-11-19 12:41:44.008600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.001 00:21:39.001 Latency(us) 00:21:39.001 [2024-11-19T12:41:44.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.001 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:39.001 nvme0n1 : 2.00 18370.62 71.76 0.00 0.00 6961.25 3202.33 27405.96 
00:21:39.001 [2024-11-19T12:41:44.261Z] =================================================================================================================== 00:21:39.001 [2024-11-19T12:41:44.261Z] Total : 18370.62 71.76 0.00 0.00 6961.25 3202.33 27405.96 00:21:39.001 { 00:21:39.001 "results": [ 00:21:39.001 { 00:21:39.001 "job": "nvme0n1", 00:21:39.001 "core_mask": "0x2", 00:21:39.001 "workload": "randwrite", 00:21:39.001 "status": "finished", 00:21:39.001 "queue_depth": 128, 00:21:39.001 "io_size": 4096, 00:21:39.001 "runtime": 2.00407, 00:21:39.001 "iops": 18370.615796853403, 00:21:39.001 "mibps": 71.7602179564586, 00:21:39.001 "io_failed": 0, 00:21:39.001 "io_timeout": 0, 00:21:39.001 "avg_latency_us": 6961.246951918139, 00:21:39.001 "min_latency_us": 3202.327272727273, 00:21:39.001 "max_latency_us": 27405.963636363635 00:21:39.001 } 00:21:39.001 ], 00:21:39.001 "core_count": 1 00:21:39.001 } 00:21:39.001 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:39.001 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:39.001 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:39.001 | .driver_specific 00:21:39.001 | .nvme_error 00:21:39.001 | .status_code 00:21:39.001 | .command_transient_transport_error' 00:21:39.001 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:39.260 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95705 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 95705 ']' 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 95705 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95705 00:21:39.261 killing process with pid 95705 00:21:39.261 Received shutdown signal, test time was about 2.000000 seconds 00:21:39.261 00:21:39.261 Latency(us) 00:21:39.261 [2024-11-19T12:41:44.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.261 [2024-11-19T12:41:44.521Z] =================================================================================================================== 00:21:39.261 [2024-11-19T12:41:44.521Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95705' 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 95705 00:21:39.261 12:41:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 95705 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95765 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95765 /var/tmp/bperf.sock 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 95765 ']' 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:39.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:39.261 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:39.520 [2024-11-19 12:41:44.537499] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:21:39.520 [2024-11-19 12:41:44.537603] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95765 ] 00:21:39.520 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:39.520 Zero copy mechanism will not be used. 
00:21:39.520 [2024-11-19 12:41:44.666114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.520 [2024-11-19 12:41:44.699098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.520 [2024-11-19 12:41:44.726621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:39.779 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:39.779 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:39.779 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:39.779 12:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:40.038 12:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:40.038 12:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.038 12:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:40.038 12:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.038 12:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:40.038 12:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:40.298 nvme0n1 00:21:40.298 12:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:40.298 12:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.298 12:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:40.298 12:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.298 12:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:40.298 12:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:40.298 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:40.298 Zero copy mechanism will not be used. 00:21:40.298 Running I/O for 2 seconds... 
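[editor's note] The xtrace lines above show the digest-error flow end to end: bdevperf is started against /var/tmp/bperf.sock, per-command NVMe error statistics and unlimited retries are enabled, the controller is attached with data digest on (--ddgst), crc32c corruption is armed via accel_error_inject_error, I/O runs for two seconds, and the transient-transport-error counter is read back from bdev_get_iostat. The sketch below is a paraphrase of those traced commands only, not the digest.sh helper functions themselves; the socket used by rpc_cmd for the injection RPC is not visible in this excerpt, so it is left as a placeholder.

```bash
#!/usr/bin/env bash
# Sketch of the digest-error sequence as traced in this log (paraphrase, not digest.sh).
SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock
TGT_SOCK=/var/tmp/target.sock   # placeholder: rpc_cmd in the log targets the nvmf target app's socket, not shown here

# 1. Start bdevperf: randwrite, 128 KiB I/O, queue depth 16, 2 s (run_bperf_err randwrite 131072 16).
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &

# 2. Keep per-command NVMe error statistics and retry failed commands indefinitely.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. Attach the controller with data digest enabled, as in the trace.
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Arm crc32c corruption (the trace first disables, then re-enables with -t corrupt -i 32), then drive I/O.
"$SPDK/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

# 5. Each injected digest error surfaces as COMMAND TRANSIENT TRANSPORT ERROR (00/22);
#    the check passes when this counter is non-zero (144 in the preceding run above).
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
```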
00:21:40.298 [2024-11-19 12:41:45.485026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.298 [2024-11-19 12:41:45.485351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.298 [2024-11-19 12:41:45.485381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.298 [2024-11-19 12:41:45.490075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.298 [2024-11-19 12:41:45.490362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.298 [2024-11-19 12:41:45.490405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.298 [2024-11-19 12:41:45.494957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.298 [2024-11-19 12:41:45.495234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.298 [2024-11-19 12:41:45.495277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.298 [2024-11-19 12:41:45.499906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.298 [2024-11-19 12:41:45.500193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.298 [2024-11-19 12:41:45.500219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.298 [2024-11-19 12:41:45.504949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.298 [2024-11-19 12:41:45.505232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.298 [2024-11-19 12:41:45.505261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.298 [2024-11-19 12:41:45.509805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.298 [2024-11-19 12:41:45.510080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.298 [2024-11-19 12:41:45.510105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.298 [2024-11-19 12:41:45.514537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.298 [2024-11-19 12:41:45.514844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.298 [2024-11-19 12:41:45.514875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.298 [2024-11-19 12:41:45.519877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.298 [2024-11-19 12:41:45.520187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.298 [2024-11-19 12:41:45.520213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.298 [2024-11-19 12:41:45.524796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.298 [2024-11-19 12:41:45.525085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.298 [2024-11-19 12:41:45.525109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.298 [2024-11-19 12:41:45.529606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.298 [2024-11-19 12:41:45.529920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.298 [2024-11-19 12:41:45.529965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.298 [2024-11-19 12:41:45.534239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.298 [2024-11-19 12:41:45.534510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.298 [2024-11-19 12:41:45.534536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.298 [2024-11-19 12:41:45.538859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.298 [2024-11-19 12:41:45.539129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.298 [2024-11-19 12:41:45.539162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.298 [2024-11-19 12:41:45.543773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.298 [2024-11-19 12:41:45.544029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.298 [2024-11-19 12:41:45.544055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.298 [2024-11-19 12:41:45.548579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.298 [2024-11-19 12:41:45.549055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.298 [2024-11-19 12:41:45.549087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.298 [2024-11-19 12:41:45.553980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.298 [2024-11-19 12:41:45.554328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.298 [2024-11-19 12:41:45.554353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.559575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.559954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.559984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.564356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.564826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.564860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.569311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.569580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.569605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.574092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.574368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.574394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.578798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.579067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.579092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.583494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.583862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.583892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.588268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.588722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.588763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.593197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.593466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.593491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.597862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.598130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.598155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.602520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.602802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.602827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.607191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.607505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.607531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.612077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.612344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.612397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.616727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.616997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 
[2024-11-19 12:41:45.617023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.621374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.621636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.621660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.626052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.626323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.626365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.630750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.631013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.631038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.635359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.635709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.635742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.640088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.640351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.640376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.559 [2024-11-19 12:41:45.644748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.559 [2024-11-19 12:41:45.645019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.559 [2024-11-19 12:41:45.645044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.649395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.649658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.649693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.654011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.654274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.654299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.658723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.658985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.659010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.663435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.663774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.663799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.668101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.668368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.668393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.672740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.673012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.673037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.677346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.677609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.677634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.682090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.682360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.682386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.686698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.686961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.686985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.691460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.691793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.691817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.696137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.696401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.696426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.700832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.701115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.701140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.705493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.705772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.705792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.710213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.710495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.710537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.714957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.715224] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.715248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.719619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.719985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.720010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.724362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.724628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.724653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.729186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.729450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.729474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.733936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.734188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.734212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.738418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.738694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.738713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.743223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.743562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.743589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.748018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.748281] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.748306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.752863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.753174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.753201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.757707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.757977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.758003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.762530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.762812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.762833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.767522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.767910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.767936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.560 [2024-11-19 12:41:45.773260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.560 [2024-11-19 12:41:45.773568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.560 [2024-11-19 12:41:45.773594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.561 [2024-11-19 12:41:45.778890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.561 [2024-11-19 12:41:45.779233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.561 [2024-11-19 12:41:45.779260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.561 [2024-11-19 12:41:45.784034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 
00:21:40.561 [2024-11-19 12:41:45.784304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.561 [2024-11-19 12:41:45.784330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.561 [2024-11-19 12:41:45.788812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.561 [2024-11-19 12:41:45.789059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.561 [2024-11-19 12:41:45.789084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.561 [2024-11-19 12:41:45.793494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.561 [2024-11-19 12:41:45.793769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.561 [2024-11-19 12:41:45.793788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.561 [2024-11-19 12:41:45.798113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.561 [2024-11-19 12:41:45.798426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.561 [2024-11-19 12:41:45.798452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.561 [2024-11-19 12:41:45.802949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.561 [2024-11-19 12:41:45.803233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.561 [2024-11-19 12:41:45.803257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.561 [2024-11-19 12:41:45.807684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.561 [2024-11-19 12:41:45.807963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.561 [2024-11-19 12:41:45.807987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.561 [2024-11-19 12:41:45.812761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.561 [2024-11-19 12:41:45.813076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.561 [2024-11-19 12:41:45.813119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.821 [2024-11-19 12:41:45.817904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.821 [2024-11-19 12:41:45.818169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.821 [2024-11-19 12:41:45.818195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.821 [2024-11-19 12:41:45.823124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.821 [2024-11-19 12:41:45.823441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.821 [2024-11-19 12:41:45.823469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.821 [2024-11-19 12:41:45.828482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.821 [2024-11-19 12:41:45.828791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.821 [2024-11-19 12:41:45.828817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.821 [2024-11-19 12:41:45.833388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.821 [2024-11-19 12:41:45.833651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.821 [2024-11-19 12:41:45.833684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.821 [2024-11-19 12:41:45.838050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.821 [2024-11-19 12:41:45.838313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.838339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.842853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.843143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.843168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.847660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.847986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.848011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.852296] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.852560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.852585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.856984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.857247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.857272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.861612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.861886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.861911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.866189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.866451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.866476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.870935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.871255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.871279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.875726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.876006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.876030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.880404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.880689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.880724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
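Editor's note: each tcp.c:2233:data_crc32_calc_done error above means that the CRC32C data digest (DDGST) computed over a received NVMe/TCP data PDU payload did not match the digest carried in the PDU, so the affected WRITE is completed with a transient transport error, as the paired nvme_qpair.c lines show. Below is a minimal, self-contained sketch of such a digest check, assuming a plain bitwise CRC32C over a hypothetical 32-byte payload (matching the len:32 commands in this log); it is an illustration only, not SPDK's actual implementation.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78,
 * init and final XOR of 0xFFFFFFFF (standard CRC32C). */
static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;
    crc = ~crc;
    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78 & (0U - (crc & 1)));
    }
    return ~crc;
}

int main(void)
{
    /* Hypothetical 32-byte payload, mirroring the len:32 WRITEs in the log. */
    uint8_t payload[32];
    memset(payload, 0xA5, sizeof(payload));

    uint32_t sent_digest = crc32c(0, payload, sizeof(payload));

    /* Flip one bit "in transit" to emulate the injected corruption. */
    payload[7] ^= 0x01;

    uint32_t recv_digest = crc32c(0, payload, sizeof(payload));
    if (recv_digest != sent_digest)
        printf("Data digest error: expected 0x%08x, got 0x%08x\n",
               (unsigned)sent_digest, (unsigned)recv_digest);
    return 0;
}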
00:21:40.822 [2024-11-19 12:41:45.885138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.885401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.885426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.889782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.890045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.890069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.894351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.894614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.894639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.899219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.899551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.899592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.904092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.904362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.904387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.908813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.909064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.909089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.913510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.913787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.913806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.918117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.918380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.918405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.922865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.923150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.923177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.927579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.927891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.927916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.932153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.932416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.932440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.936836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.937099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.937124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.941501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.941775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.941795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.946103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.946366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.946424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.950756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.951047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.951072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.955475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.955794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.955814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.960303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.960566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.822 [2024-11-19 12:41:45.960636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.822 [2024-11-19 12:41:45.965116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.822 [2024-11-19 12:41:45.965386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:45.965411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:45.970120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:45.970411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:45.970437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:45.974988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:45.975268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:45.975293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:45.979752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:45.980019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:45.980043] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:45.984390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:45.984653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:45.984686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:45.989021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:45.989287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:45.989306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:45.993802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:45.994088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:45.994114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:45.998526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:45.998818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:45.998842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:46.003170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:46.003481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:46.003507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:46.007966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:46.008239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:46.008265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:46.012595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:46.012871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 
12:41:46.012896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:46.017258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:46.017523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:46.017548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:46.021900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:46.022164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:46.022189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:46.026604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:46.026956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:46.026984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:46.031896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:46.032190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:46.032215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:46.036805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:46.037088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:46.037113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:46.041634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:46.041928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:46.041953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:46.046410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:46.046674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:46.046709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:46.051219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:46.051606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:46.051679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:46.056352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:46.056617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:46.056642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:46.061328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:46.061634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:46.061655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:46.066462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:46.066795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:46.066817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.823 [2024-11-19 12:41:46.071757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:40.823 [2024-11-19 12:41:46.072065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.823 [2024-11-19 12:41:46.072101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.084 [2024-11-19 12:41:46.077807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.084 [2024-11-19 12:41:46.078163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.084 [2024-11-19 12:41:46.078221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.083851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.084238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.084285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.089646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.090016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.090074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.095181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.095503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.095532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.100657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.100995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.101021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.105925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.106225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.106249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.111193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.111533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.111562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.116603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.116960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.116986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.121833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.122102] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.122128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.126596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.126931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.126956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.131997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.132295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.132330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.136692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.136954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.136978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.141335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.141599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.141624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.145953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.146217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.146242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.150566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.150841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.150865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.155218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.155553] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.155581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.160000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.160297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.160322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.164806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.165091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.165116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.169385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.169649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.169684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.174177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.174447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.174473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.178878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.179144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.179168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.183658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.183958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.183983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.188282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 
00:21:41.085 [2024-11-19 12:41:46.188545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.188570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.192900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.193181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.193206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.197530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.197808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.197828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.202159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.202423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.202473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.206873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.085 [2024-11-19 12:41:46.207139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.085 [2024-11-19 12:41:46.207164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.085 [2024-11-19 12:41:46.211660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.211957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.211981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.216268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.216534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.216559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.220897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.221182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.221207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.225566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.225844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.225864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.230380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.230693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.230714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.235156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.235479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.235505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.239894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.240160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.240180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.244487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.244780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.244833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.249208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.249471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.249496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.253893] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.254157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.254182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.258635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.258909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.258933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.263392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.263766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.263801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.268173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.268437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.268462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.272876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.273162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.273186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.277566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.277849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.277890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.282268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.282535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.282560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
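Editor's note: the completion lines print the status as "(SCT/SC)" plus the phase, more, and do-not-retry flags; 00/22 is status code type 0x0 (generic) with status code 0x22, which is printed as COMMAND TRANSIENT TRANSPORT ERROR. A hedged sketch of decoding that 16-bit status-plus-phase field follows, assuming the NVMe completion queue entry layout (phase tag in the low bit of the upper half-word of dword 3, then SC, SCT, CRD, M, DNR); the struct and function names here are hypothetical.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical decoder for the completion status field as printed above. */
struct nvme_status {
    unsigned p   : 1;  /* phase tag           */
    unsigned sc  : 8;  /* status code         */
    unsigned sct : 3;  /* status code type    */
    unsigned crd : 2;  /* command retry delay */
    unsigned m   : 1;  /* more                */
    unsigned dnr : 1;  /* do not retry        */
};

static struct nvme_status decode_status(uint16_t raw)
{
    struct nvme_status s = {
        .p   =  raw        & 0x1,
        .sc  = (raw >> 1)  & 0xff,
        .sct = (raw >> 9)  & 0x7,
        .crd = (raw >> 12) & 0x3,
        .m   = (raw >> 14) & 0x1,
        .dnr = (raw >> 15) & 0x1,
    };
    return s;
}

int main(void)
{
    /* SCT=0x0 (generic), SC=0x22 (transient transport error), flags clear. */
    uint16_t raw = (0x0 << 9) | (0x22 << 1);
    struct nvme_status s = decode_status(raw);
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n",
           (unsigned)s.sct, (unsigned)s.sc,
           (unsigned)s.p, (unsigned)s.m, (unsigned)s.dnr);
    return 0;
}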
00:21:41.086 [2024-11-19 12:41:46.286920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.287183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.287208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.291616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.291949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.291973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.296287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.296550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.296574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.301029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.301308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.301333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.305715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.305977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.306002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.310284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.310563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.310587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.314982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.315247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.315272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.319639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.319945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.319970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.324212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.324554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.324580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.329173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.329437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.329461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.333780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.086 [2024-11-19 12:41:46.334042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.086 [2024-11-19 12:41:46.334067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.086 [2024-11-19 12:41:46.338979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.087 [2024-11-19 12:41:46.339350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.087 [2024-11-19 12:41:46.339390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.347 [2024-11-19 12:41:46.344220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.347 [2024-11-19 12:41:46.344483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.347 [2024-11-19 12:41:46.344508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.347 [2024-11-19 12:41:46.349287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.347 [2024-11-19 12:41:46.349549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.347 [2024-11-19 12:41:46.349574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.347 [2024-11-19 12:41:46.353959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.347 [2024-11-19 12:41:46.354222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.347 [2024-11-19 12:41:46.354247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.347 [2024-11-19 12:41:46.358620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.347 [2024-11-19 12:41:46.358895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.347 [2024-11-19 12:41:46.358920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.347 [2024-11-19 12:41:46.363497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.363878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.363903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.368301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.368567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.368592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.372942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.373225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.373250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.377608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.377904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.377929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.382338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.382612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.382637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.387022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.387288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.387352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.391609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.391928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.391953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.396368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.396632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.396657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.401029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.401311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.401335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.405736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.406001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.406025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.410362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.410641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.410676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.415124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.415439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 
[2024-11-19 12:41:46.415465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.419818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.420080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.420104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.424521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.424815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.424840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.429332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.429595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.429620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.433930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.434194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.434218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.438557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.438832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.438852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.443259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.443608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.443650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.447980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.448245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.448271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.452510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.452808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.452832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.457231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.457493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.457518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.461914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.462180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.462204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.466521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.466798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.466818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.471114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.471425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.471451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.476082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.476355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.476381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.348 6424.00 IOPS, 803.00 MiB/s [2024-11-19T12:41:46.608Z] [2024-11-19 12:41:46.481822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.482087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.482112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.348 [2024-11-19 12:41:46.486459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.348 [2024-11-19 12:41:46.486738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.348 [2024-11-19 12:41:46.486758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.491137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.491479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.491506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.496016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.496280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.496305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.500624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.500920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.500946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.505257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.505520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.505545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.509953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.510238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.510264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.514979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 
[2024-11-19 12:41:46.515260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.515286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.520064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.520352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.520378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.525236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.525508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.525533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.530401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.530686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.530726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.535621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.536037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.536068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.540893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.541163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.541200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.546151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.546422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.546448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.551355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with 
pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.551646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.551731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.556620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.556943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.556985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.561529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.561832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.561857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.566454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.566758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.566784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.571580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.571899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.571924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.576414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.576687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.576722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.581214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.581487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.581511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.585979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.586253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.586278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.591085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.591397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.591423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.596055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.596325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.596350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.349 [2024-11-19 12:41:46.601105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.349 [2024-11-19 12:41:46.601408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.349 [2024-11-19 12:41:46.601434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.610 [2024-11-19 12:41:46.606291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.610 [2024-11-19 12:41:46.606561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.610 [2024-11-19 12:41:46.606587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.610 [2024-11-19 12:41:46.611810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.610 [2024-11-19 12:41:46.612082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.610 [2024-11-19 12:41:46.612107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.610 [2024-11-19 12:41:46.616604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.610 [2024-11-19 12:41:46.616887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.610 [2024-11-19 12:41:46.616912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.610 [2024-11-19 12:41:46.621446] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.610 [2024-11-19 12:41:46.621732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.610 [2024-11-19 12:41:46.621757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.610 [2024-11-19 12:41:46.626258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.610 [2024-11-19 12:41:46.626526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.610 [2024-11-19 12:41:46.626551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.610 [2024-11-19 12:41:46.631292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.610 [2024-11-19 12:41:46.631681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.610 [2024-11-19 12:41:46.631730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.610 [2024-11-19 12:41:46.636184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.610 [2024-11-19 12:41:46.636454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.610 [2024-11-19 12:41:46.636479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.610 [2024-11-19 12:41:46.641020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.610 [2024-11-19 12:41:46.641290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.610 [2024-11-19 12:41:46.641315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.610 [2024-11-19 12:41:46.645758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.610 [2024-11-19 12:41:46.646031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.610 [2024-11-19 12:41:46.646056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.610 [2024-11-19 12:41:46.650755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.610 [2024-11-19 12:41:46.651025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.610 [2024-11-19 12:41:46.651051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:41.610 [2024-11-19 12:41:46.655611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.610 [2024-11-19 12:41:46.655994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.610 [2024-11-19 12:41:46.656020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.610 [2024-11-19 12:41:46.660573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.610 [2024-11-19 12:41:46.660855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.610 [2024-11-19 12:41:46.660880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.610 [2024-11-19 12:41:46.665608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.610 [2024-11-19 12:41:46.665925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.610 [2024-11-19 12:41:46.665952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.610 [2024-11-19 12:41:46.670460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.610 [2024-11-19 12:41:46.670743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.610 [2024-11-19 12:41:46.670769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.610 [2024-11-19 12:41:46.675201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.675520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.675546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.680420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.680690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.680727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.685226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.685495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.685521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.690026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.690300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.690324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.695061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.695392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.695419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.700041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.700330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.700355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.704826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.705096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.705120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.709791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.710103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.710128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.714610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.714894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.714919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.719582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.719941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.719969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.724446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.724728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.724748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.729372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.729638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.729689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.734205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.734467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.734491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.738984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.739232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.739256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.743734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.744011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.744035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.748325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.748588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.748613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.753102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.753366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.753391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.757952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.758215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.758240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.762580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.762857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.762882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.767442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.767773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.767798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.772154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.772417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.772442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.776765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.777030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.777054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.781441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.781718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.781743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.786172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.786434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 
[2024-11-19 12:41:46.786459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.790817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.791082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.791106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.795415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.795756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.795791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.800138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.611 [2024-11-19 12:41:46.800404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.611 [2024-11-19 12:41:46.800429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.611 [2024-11-19 12:41:46.804810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.612 [2024-11-19 12:41:46.805074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.612 [2024-11-19 12:41:46.805099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.612 [2024-11-19 12:41:46.809414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.612 [2024-11-19 12:41:46.809691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.612 [2024-11-19 12:41:46.809715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.612 [2024-11-19 12:41:46.814152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.612 [2024-11-19 12:41:46.814415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.612 [2024-11-19 12:41:46.814440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.612 [2024-11-19 12:41:46.818851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.612 [2024-11-19 12:41:46.819114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:41.612 [2024-11-19 12:41:46.819138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.612 [2024-11-19 12:41:46.823493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.612 [2024-11-19 12:41:46.823838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.612 [2024-11-19 12:41:46.823865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.612 [2024-11-19 12:41:46.828250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.612 [2024-11-19 12:41:46.828514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.612 [2024-11-19 12:41:46.828539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.612 [2024-11-19 12:41:46.832986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.612 [2024-11-19 12:41:46.833269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.612 [2024-11-19 12:41:46.833294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.612 [2024-11-19 12:41:46.837696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.612 [2024-11-19 12:41:46.837959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.612 [2024-11-19 12:41:46.837984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.612 [2024-11-19 12:41:46.842276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.612 [2024-11-19 12:41:46.842538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.612 [2024-11-19 12:41:46.842564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.612 [2024-11-19 12:41:46.846888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.612 [2024-11-19 12:41:46.847150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.612 [2024-11-19 12:41:46.847176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.612 [2024-11-19 12:41:46.851771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.612 [2024-11-19 12:41:46.852063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.612 [2024-11-19 12:41:46.852087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.612 [2024-11-19 12:41:46.856630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.612 [2024-11-19 12:41:46.856936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.612 [2024-11-19 12:41:46.856962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.612 [2024-11-19 12:41:46.861571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.612 [2024-11-19 12:41:46.861910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.612 [2024-11-19 12:41:46.861936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.866988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.867320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.867378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.872000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.872330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.872356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.876886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.877150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.877169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.881551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.881831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.881850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.886167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.886433] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.886459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.890841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.891106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.891130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.895484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.895828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.895853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.900383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.900653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.900688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.905112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.905374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.905399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.909738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.910002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.910026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.914414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.914678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.914711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.919004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.919267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.919291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.923733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.924028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.924068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.928443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.928707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.928758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.933092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.933354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.933379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.937830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.938113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.938139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.942473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.942748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.942768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.947104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.947416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.947442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.951851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 
[2024-11-19 12:41:46.952123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.952147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.956599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.873 [2024-11-19 12:41:46.956887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.873 [2024-11-19 12:41:46.956912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.873 [2024-11-19 12:41:46.961271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:46.961537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:46.961557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:46.965887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:46.966166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:46.966203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:46.970427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:46.970704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:46.970724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:46.974996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:46.975276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:46.975352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:46.979741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:46.980026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:46.980051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:46.984313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) 
with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:46.984579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:46.984603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:46.989011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:46.989274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:46.989299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:46.993647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:46.993922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:46.993946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:46.998312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:46.998585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:46.998610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.003017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.003286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.003351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.007892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.008179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.008204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.012556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.012831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.012851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.017131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.017395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.017445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.021921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.022194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.022219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.026507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.026784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.026803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.031287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.031636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.031718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.036137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.036400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.036424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.040881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.041150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.041175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.045467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.045748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.045773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.050077] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.050340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.050365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.054692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.054955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.054979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.059479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.059807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.059832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.064189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.064454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.064478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.068964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.069248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.069272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.073592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.073870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.073895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.078180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.078444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.078468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:41.874 [2024-11-19 12:41:47.082865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.874 [2024-11-19 12:41:47.083131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.874 [2024-11-19 12:41:47.083155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.874 [2024-11-19 12:41:47.087703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.875 [2024-11-19 12:41:47.087983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.875 [2024-11-19 12:41:47.088008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.875 [2024-11-19 12:41:47.092759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.875 [2024-11-19 12:41:47.093026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.875 [2024-11-19 12:41:47.093081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.875 [2024-11-19 12:41:47.097867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.875 [2024-11-19 12:41:47.098166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.875 [2024-11-19 12:41:47.098191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.875 [2024-11-19 12:41:47.103221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.875 [2024-11-19 12:41:47.103602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.875 [2024-11-19 12:41:47.103630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.875 [2024-11-19 12:41:47.108895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.875 [2024-11-19 12:41:47.109240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.875 [2024-11-19 12:41:47.109273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.875 [2024-11-19 12:41:47.114311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.875 [2024-11-19 12:41:47.114578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.875 [2024-11-19 12:41:47.114603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.875 [2024-11-19 12:41:47.119919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.875 [2024-11-19 12:41:47.120241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.875 [2024-11-19 12:41:47.120266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.875 [2024-11-19 12:41:47.125168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:41.875 [2024-11-19 12:41:47.125511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.875 [2024-11-19 12:41:47.125536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.135 [2024-11-19 12:41:47.130957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.135 [2024-11-19 12:41:47.131270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.135 [2024-11-19 12:41:47.131302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.135 [2024-11-19 12:41:47.136468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.136767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.136803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.141498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.141775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.141800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.146162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.146428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.146453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.150962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.151267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.151291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.155863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.156147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.156171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.160603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.160880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.160906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.165155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.165421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.165446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.169946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.170219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.170246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.174571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.174850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.174875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.179375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.179732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.179767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.184230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.184493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.184518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.188841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.189105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.189130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.193448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.193726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.193751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.198061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.198326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.198351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.202678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.202942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.202966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.207233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.207549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.207574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.211992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.212272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.212296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.216574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.216850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 
[2024-11-19 12:41:47.216875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.221227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.221492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.221517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.225831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.226093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.226117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.230480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.230753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.230772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.235056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.235376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.235413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.239812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.240104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.240129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.244349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.244614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.244633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.248997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.249258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.249310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.253749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.136 [2024-11-19 12:41:47.254021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.136 [2024-11-19 12:41:47.254046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.136 [2024-11-19 12:41:47.258377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.258641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.258675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.263003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.263265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.263290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.267616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.267966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.267991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.272413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.272678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.272727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.277179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.277445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.277470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.281906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.282173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.282198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.286687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.286978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.287003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.291829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.292109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.292134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.296656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.296982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.297008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.301442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.301705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.301741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.306113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.306379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.306404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.310733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.310995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.311020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.315461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.315815] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.315839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.320273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.320544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.320569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.325007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.325289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.325313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.329768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.330050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.330075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.334546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.334830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.334855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.339256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.339566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.339606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.344031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.344296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.344321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.348625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.348921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.348947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.353329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.353591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.353616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.358008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.358274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.358299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.362694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.362957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.362981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.367197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.367508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.367534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.371927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.372192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.372216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.376588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 12:41:47.376885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.376911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.381269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.137 [2024-11-19 
12:41:47.381536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.137 [2024-11-19 12:41:47.381560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.137 [2024-11-19 12:41:47.385899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.138 [2024-11-19 12:41:47.386166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.138 [2024-11-19 12:41:47.386190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.138 [2024-11-19 12:41:47.391034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.138 [2024-11-19 12:41:47.391322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.138 [2024-11-19 12:41:47.391379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.397 [2024-11-19 12:41:47.396203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.397 [2024-11-19 12:41:47.396465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.396490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.401274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.401539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.401564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.405979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.406245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.406271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.410620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.410896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.410920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.415356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with 
pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.415676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.415723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.420136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.420409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.420434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.424858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.425143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.425167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.429445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.429742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.429766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.434172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.434440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.434464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.438809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.439075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.439099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.443398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.443729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.443762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.448228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.448493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.448518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.452841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.453127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.453151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.457585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.457880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.457904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.462174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.462441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.462465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.466746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.467010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.467034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.471346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.471607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.471647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.398 [2024-11-19 12:41:47.476110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcae770) with pdu=0x2000198fef90 00:21:42.398 [2024-11-19 12:41:47.476373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.398 [2024-11-19 12:41:47.476397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.398 6436.00 IOPS, 804.50 MiB/s 00:21:42.398 
Latency(us) 00:21:42.398 [2024-11-19T12:41:47.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.398 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:42.398 nvme0n1 : 2.00 6436.40 804.55 0.00 0.00 2480.25 1400.09 6225.92 00:21:42.398 [2024-11-19T12:41:47.658Z] =================================================================================================================== 00:21:42.398 [2024-11-19T12:41:47.658Z] Total : 6436.40 804.55 0.00 0.00 2480.25 1400.09 6225.92 00:21:42.398 { 00:21:42.398 "results": [ 00:21:42.398 { 00:21:42.398 "job": "nvme0n1", 00:21:42.398 "core_mask": "0x2", 00:21:42.398 "workload": "randwrite", 00:21:42.398 "status": "finished", 00:21:42.398 "queue_depth": 16, 00:21:42.398 "io_size": 131072, 00:21:42.398 "runtime": 2.003603, 00:21:42.398 "iops": 6436.404816722674, 00:21:42.398 "mibps": 804.5506020903342, 00:21:42.398 "io_failed": 0, 00:21:42.398 "io_timeout": 0, 00:21:42.398 "avg_latency_us": 2480.2464425896683, 00:21:42.398 "min_latency_us": 1400.0872727272726, 00:21:42.398 "max_latency_us": 6225.92 00:21:42.398 } 00:21:42.398 ], 00:21:42.398 "core_count": 1 00:21:42.398 } 00:21:42.398 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:42.398 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:42.398 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:42.398 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:42.398 | .driver_specific 00:21:42.398 | .nvme_error 00:21:42.398 | .status_code 00:21:42.398 | .command_transient_transport_error' 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 415 > 0 )) 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95765 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 95765 ']' 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 95765 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95765 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:42.658 killing process with pid 95765 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95765' 00:21:42.658 Received shutdown signal, test time was about 2.000000 seconds 00:21:42.658 00:21:42.658 Latency(us) 00:21:42.658 [2024-11-19T12:41:47.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.658 [2024-11-19T12:41:47.918Z] 
=================================================================================================================== 00:21:42.658 [2024-11-19T12:41:47.918Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 95765 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 95765 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 95578 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 95578 ']' 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 95578 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.658 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95578 00:21:42.917 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:42.917 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:42.917 killing process with pid 95578 00:21:42.917 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95578' 00:21:42.917 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 95578 00:21:42.917 12:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 95578 00:21:42.917 00:21:42.917 real 0m15.704s 00:21:42.917 user 0m30.812s 00:21:42.917 sys 0m4.312s 00:21:42.917 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:42.917 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:42.917 ************************************ 00:21:42.917 END TEST nvmf_digest_error 00:21:42.917 ************************************ 00:21:42.917 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:42.917 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:42.917 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:42.917 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:21:42.917 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:42.917 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:21:42.917 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:42.917 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:43.177 rmmod nvme_tcp 00:21:43.177 rmmod nvme_fabrics 00:21:43.177 rmmod nvme_keyring 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@513 -- # '[' -n 95578 ']' 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 95578 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 95578 ']' 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 95578 00:21:43.177 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (95578) - No such process 00:21:43.177 Process with pid 95578 is not found 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 95578 is not found' 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:43.177 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:43.437 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:43.437 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.437 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.437 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.437 12:41:48 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:21:43.437 00:21:43.437 real 0m32.069s 00:21:43.437 user 1m0.830s 00:21:43.437 sys 0m9.140s 00:21:43.437 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:43.437 12:41:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:43.437 ************************************ 00:21:43.438 END TEST nvmf_digest 00:21:43.438 ************************************ 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.438 ************************************ 00:21:43.438 START TEST nvmf_host_multipath 00:21:43.438 ************************************ 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:43.438 * Looking for test storage... 00:21:43.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:43.438 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:43.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.438 --rc genhtml_branch_coverage=1 00:21:43.438 --rc genhtml_function_coverage=1 00:21:43.438 --rc genhtml_legend=1 00:21:43.438 --rc geninfo_all_blocks=1 00:21:43.438 --rc geninfo_unexecuted_blocks=1 00:21:43.438 00:21:43.438 ' 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:43.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.699 --rc genhtml_branch_coverage=1 00:21:43.699 --rc genhtml_function_coverage=1 00:21:43.699 --rc genhtml_legend=1 00:21:43.699 --rc geninfo_all_blocks=1 00:21:43.699 --rc geninfo_unexecuted_blocks=1 00:21:43.699 00:21:43.699 ' 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:43.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.699 --rc genhtml_branch_coverage=1 00:21:43.699 --rc genhtml_function_coverage=1 00:21:43.699 --rc genhtml_legend=1 00:21:43.699 --rc geninfo_all_blocks=1 00:21:43.699 --rc geninfo_unexecuted_blocks=1 00:21:43.699 00:21:43.699 ' 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:43.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:43.699 --rc genhtml_branch_coverage=1 00:21:43.699 --rc genhtml_function_coverage=1 00:21:43.699 --rc genhtml_legend=1 00:21:43.699 --rc geninfo_all_blocks=1 00:21:43.699 --rc geninfo_unexecuted_blocks=1 00:21:43.699 00:21:43.699 ' 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:43.699 
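The trace above is scripts/common.sh stepping through its cmp_versions helper to decide whether the installed lcov (1.15) sorts before 2, which drives the lcov rc options picked a few entries later. A minimal stand-alone sketch of that dotted-version comparison, assuming plain numeric fields are enough (version_lt is a hypothetical name, not the repo's helper):

# version_lt A B -> success (0) when A sorts strictly before B, comparing dot-separated fields
version_lt() {
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1    # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"    # mirrors the lt 1.15 2 check traced above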
12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:43.699 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:43.699 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:43.700 Cannot find device "nvmf_init_br" 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:43.700 Cannot find device "nvmf_init_br2" 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:43.700 Cannot find device "nvmf_tgt_br" 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:43.700 Cannot find device "nvmf_tgt_br2" 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:43.700 Cannot find device "nvmf_init_br" 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:43.700 Cannot find device "nvmf_init_br2" 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:43.700 Cannot find device "nvmf_tgt_br" 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:43.700 Cannot find device "nvmf_tgt_br2" 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:43.700 Cannot find device "nvmf_br" 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:43.700 Cannot find device "nvmf_init_if" 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:43.700 Cannot find device "nvmf_init_if2" 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:21:43.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:43.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:43.700 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:43.960 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:43.960 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:43.960 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:43.960 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:43.960 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:43.960 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:43.960 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:43.960 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:43.960 12:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
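nvmf_veth_init, traced above and continuing below, wires veth pairs and a bridge so the initiator in the default namespace (10.0.0.1/10.0.0.2) can reach the target inside the nvmf_tgt_ns_spdk namespace (10.0.0.3/10.0.0.4). A condensed sketch of one initiator/target pair, using the interface names from the trace and omitting the second pair and the iptables ACCEPT rules added below:

# target side gets its own network namespace
ip netns add nvmf_tgt_ns_spdk
# one veth pair per side; the *_br ends will be enslaved to the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
# addresses: initiator in the root namespace, target inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# the bridge ties the two *_br ends together
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ping -c 1 10.0.0.3    # same reachability check the script runs below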
00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:43.960 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:43.960 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:21:43.960 00:21:43.960 --- 10.0.0.3 ping statistics --- 00:21:43.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.960 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:43.960 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:43.960 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:21:43.960 00:21:43.960 --- 10.0.0.4 ping statistics --- 00:21:43.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.960 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:43.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:43.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:43.960 00:21:43.960 --- 10.0.0.1 ping statistics --- 00:21:43.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.960 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:43.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:43.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:21:43.960 00:21:43.960 --- 10.0.0.2 ping statistics --- 00:21:43.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.960 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # return 0 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:43.960 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:43.961 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:43.961 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:43.961 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:43.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.961 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # nvmfpid=96078 00:21:43.961 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:43.961 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # waitforlisten 96078 00:21:43.961 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 96078 ']' 00:21:43.961 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.961 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.961 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.961 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.961 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:43.961 [2024-11-19 12:41:49.211116] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:21:43.961 [2024-11-19 12:41:49.211914] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.218 [2024-11-19 12:41:49.354743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:44.218 [2024-11-19 12:41:49.399611] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.218 [2024-11-19 12:41:49.399993] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.218 [2024-11-19 12:41:49.400196] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.218 [2024-11-19 12:41:49.400467] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.218 [2024-11-19 12:41:49.400642] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:44.218 [2024-11-19 12:41:49.401019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.218 [2024-11-19 12:41:49.401032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.218 [2024-11-19 12:41:49.437842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:44.478 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.478 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:21:44.478 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:44.478 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:44.478 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:44.478 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.478 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=96078 00:21:44.478 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:44.737 [2024-11-19 12:41:49.809917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.737 12:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:44.997 Malloc0 00:21:44.997 12:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:45.256 12:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:45.515 12:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:45.774 [2024-11-19 12:41:50.839170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:45.774 12:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:46.033 [2024-11-19 12:41:51.063260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:46.033 12:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=96116 00:21:46.033 12:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:46.033 12:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:46.033 12:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 96116 /var/tmp/bdevperf.sock 00:21:46.033 12:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 96116 ']' 00:21:46.033 12:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.033 12:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:46.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.033 12:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.033 12:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:46.033 12:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:46.292 12:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:46.292 12:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:21:46.292 12:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:46.550 12:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:46.809 Nvme0n1 00:21:46.809 12:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:47.068 Nvme0n1 00:21:47.068 12:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:47.068 12:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:48.446 12:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:48.446 12:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:48.446 12:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:48.705 12:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:48.705 12:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96158 00:21:48.705 12:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96078 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:48.705 12:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:55.380 12:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:55.380 12:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:55.380 12:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:55.380 12:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:55.380 Attaching 4 probes... 00:21:55.380 @path[10.0.0.3, 4421]: 19816 00:21:55.380 @path[10.0.0.3, 4421]: 20654 00:21:55.380 @path[10.0.0.3, 4421]: 20659 00:21:55.380 @path[10.0.0.3, 4421]: 20544 00:21:55.380 @path[10.0.0.3, 4421]: 20374 00:21:55.380 12:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:55.380 12:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:55.380 12:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:55.380 12:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:55.380 12:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:55.380 12:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:55.380 12:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96158 00:21:55.380 12:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:55.380 12:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:55.380 12:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:55.380 12:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:55.639 12:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:55.639 12:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96274 00:21:55.639 12:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96078 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:55.639 12:42:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:02.208 12:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:02.208 12:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:02.208 12:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:02.208 12:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:02.208 Attaching 4 probes... 00:22:02.208 @path[10.0.0.3, 4420]: 19370 00:22:02.208 @path[10.0.0.3, 4420]: 19496 00:22:02.208 @path[10.0.0.3, 4420]: 19989 00:22:02.208 @path[10.0.0.3, 4420]: 20414 00:22:02.208 @path[10.0.0.3, 4420]: 20448 00:22:02.208 12:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:02.208 12:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:02.208 12:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:02.208 12:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:02.208 12:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:02.208 12:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:02.208 12:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96274 00:22:02.208 12:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:02.208 12:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:02.208 12:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:02.208 12:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:02.467 12:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:02.467 12:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96386 00:22:02.467 12:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96078 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:02.467 12:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:09.035 12:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:09.035 12:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:09.035 12:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:09.035 12:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:09.035 Attaching 4 probes... 00:22:09.035 @path[10.0.0.3, 4421]: 15048 00:22:09.035 @path[10.0.0.3, 4421]: 20359 00:22:09.035 @path[10.0.0.3, 4421]: 20522 00:22:09.035 @path[10.0.0.3, 4421]: 20793 00:22:09.035 @path[10.0.0.3, 4421]: 20669 00:22:09.035 12:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:09.035 12:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:09.035 12:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:09.035 12:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:09.035 12:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:09.035 12:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:09.035 12:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96386 00:22:09.035 12:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:09.035 12:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:09.035 12:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:09.035 12:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:09.294 12:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:09.294 12:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96078 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:09.294 12:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96502 00:22:09.294 12:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:15.862 12:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:15.862 12:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:15.862 12:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:15.862 12:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:15.862 Attaching 4 probes... 
00:22:15.862 00:22:15.862 00:22:15.862 00:22:15.862 00:22:15.862 00:22:15.862 12:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:15.862 12:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:15.862 12:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:15.862 12:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:15.862 12:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:15.862 12:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:15.862 12:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96502 00:22:15.862 12:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:15.862 12:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:15.862 12:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:15.862 12:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:16.121 12:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:16.121 12:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96620 00:22:16.121 12:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96078 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:16.121 12:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:22.738 12:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:22.738 12:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:22.738 12:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:22.738 12:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:22.738 Attaching 4 probes... 
00:22:22.738 @path[10.0.0.3, 4421]: 19488 00:22:22.738 @path[10.0.0.3, 4421]: 19936 00:22:22.738 @path[10.0.0.3, 4421]: 19966 00:22:22.738 @path[10.0.0.3, 4421]: 19812 00:22:22.738 @path[10.0.0.3, 4421]: 19968 00:22:22.738 12:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:22.738 12:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:22.738 12:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:22.738 12:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:22.738 12:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:22.738 12:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:22.738 12:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96620 00:22:22.738 12:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:22.738 12:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:22.738 12:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:23.676 12:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:23.676 12:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96738 00:22:23.676 12:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96078 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:23.676 12:42:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:30.242 12:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:30.243 12:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:30.243 12:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:30.243 12:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:30.243 Attaching 4 probes... 
00:22:30.243 @path[10.0.0.3, 4420]: 19280 00:22:30.243 @path[10.0.0.3, 4420]: 19898 00:22:30.243 @path[10.0.0.3, 4420]: 19631 00:22:30.243 @path[10.0.0.3, 4420]: 19725 00:22:30.243 @path[10.0.0.3, 4420]: 19696 00:22:30.243 12:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:30.243 12:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:30.243 12:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:30.243 12:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:30.243 12:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:30.243 12:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:30.243 12:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96738 00:22:30.243 12:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:30.243 12:42:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:30.243 [2024-11-19 12:42:35.283041] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:30.243 12:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:30.501 12:42:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:37.077 12:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:37.077 12:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96918 00:22:37.077 12:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96078 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:37.077 12:42:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:42.348 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:42.348 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:42.607 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:42.607 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:42.607 Attaching 4 probes... 
00:22:42.608 @path[10.0.0.3, 4421]: 19471 00:22:42.608 @path[10.0.0.3, 4421]: 19741 00:22:42.608 @path[10.0.0.3, 4421]: 19727 00:22:42.608 @path[10.0.0.3, 4421]: 19927 00:22:42.608 @path[10.0.0.3, 4421]: 19969 00:22:42.608 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:42.608 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:42.608 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:42.608 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:42.608 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:42.608 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:42.608 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96918 00:22:42.608 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:42.608 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 96116 00:22:42.608 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 96116 ']' 00:22:42.608 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 96116 00:22:42.608 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:22:42.608 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:42.608 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96116 00:22:42.876 killing process with pid 96116 00:22:42.876 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:42.876 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:42.876 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96116' 00:22:42.876 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 96116 00:22:42.876 12:42:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 96116 00:22:42.876 { 00:22:42.876 "results": [ 00:22:42.876 { 00:22:42.876 "job": "Nvme0n1", 00:22:42.876 "core_mask": "0x4", 00:22:42.876 "workload": "verify", 00:22:42.876 "status": "terminated", 00:22:42.876 "verify_range": { 00:22:42.876 "start": 0, 00:22:42.876 "length": 16384 00:22:42.876 }, 00:22:42.876 "queue_depth": 128, 00:22:42.876 "io_size": 4096, 00:22:42.876 "runtime": 55.450919, 00:22:42.876 "iops": 8481.374312299495, 00:22:42.876 "mibps": 33.1303684074199, 00:22:42.876 "io_failed": 0, 00:22:42.876 "io_timeout": 0, 00:22:42.876 "avg_latency_us": 15062.182627382908, 00:22:42.876 "min_latency_us": 366.7781818181818, 00:22:42.876 "max_latency_us": 7015926.69090909 00:22:42.876 } 00:22:42.876 ], 00:22:42.876 "core_count": 1 00:22:42.876 } 00:22:42.876 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 96116 00:22:42.876 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:42.876 [2024-11-19 12:41:51.138059] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / 
DPDK 23.11.0 initialization... 00:22:42.876 [2024-11-19 12:41:51.138162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96116 ] 00:22:42.876 [2024-11-19 12:41:51.279769] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.876 [2024-11-19 12:41:51.322527] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.876 [2024-11-19 12:41:51.355917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:42.876 [2024-11-19 12:41:52.255227] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:22:42.876 Running I/O for 90 seconds... 00:22:42.876 7828.00 IOPS, 30.58 MiB/s [2024-11-19T12:42:48.136Z] 8822.00 IOPS, 34.46 MiB/s [2024-11-19T12:42:48.136Z] 9252.00 IOPS, 36.14 MiB/s [2024-11-19T12:42:48.136Z] 9525.00 IOPS, 37.21 MiB/s [2024-11-19T12:42:48.136Z] 9686.80 IOPS, 37.84 MiB/s [2024-11-19T12:42:48.136Z] 9784.67 IOPS, 38.22 MiB/s [2024-11-19T12:42:48.136Z] 9842.29 IOPS, 38.45 MiB/s [2024-11-19T12:42:48.136Z] 9847.50 IOPS, 38.47 MiB/s [2024-11-19T12:42:48.136Z] [2024-11-19 12:42:00.688772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.876 [2024-11-19 12:42:00.688835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:42.876 [2024-11-19 12:42:00.688884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.876 [2024-11-19 12:42:00.688904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:42.876 [2024-11-19 12:42:00.688925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.876 [2024-11-19 12:42:00.688939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:42.876 [2024-11-19 12:42:00.688958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.876 [2024-11-19 12:42:00.688971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:42.876 [2024-11-19 12:42:00.688990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.876 [2024-11-19 12:42:00.689003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:42.876 [2024-11-19 12:42:00.689021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.876 [2024-11-19 12:42:00.689035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:42.876 
[2024-11-19 12:42:00.689053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.876 [2024-11-19 12:42:00.689066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:42.876 [2024-11-19 12:42:00.689084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.876 [2024-11-19 12:42:00.689097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:42.876 [2024-11-19 12:42:00.689115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.876 [2024-11-19 12:42:00.689128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.876 [2024-11-19 12:42:00.689169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.876 [2024-11-19 12:42:00.689184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.876 [2024-11-19 12:42:00.689202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.876 [2024-11-19 12:42:00.689215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:42.876 [2024-11-19 12:42:00.689233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.689246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.689277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:126792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.689308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.689339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.689371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.689403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.689435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.689466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.689497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.689530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.689569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.689601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.689649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.689829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:126824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.689868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.689902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.689936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.689970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.689990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:126856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.690019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.690038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:126864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.690052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.690072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.690101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.692216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.692271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.692307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:42.877 [2024-11-19 12:42:00.692339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.692371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.692404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.692436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.877 [2024-11-19 12:42:00.692469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.692501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.692533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.692565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.692597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.692629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.692680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.692718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.692750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:42.877 [2024-11-19 12:42:00.692769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-11-19 12:42:00.692782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.692801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.878 [2024-11-19 12:42:00.692814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.692833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.878 [2024-11-19 12:42:00.692846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.692865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.878 [2024-11-19 12:42:00.692878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.692897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.878 [2024-11-19 12:42:00.692910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.692929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.878 [2024-11-19 12:42:00.692942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.692960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.878 [2024-11-19 12:42:00.692974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.692993] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.878 [2024-11-19 12:42:00.693007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.694952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.694987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.878 [2024-11-19 12:42:00.695623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.878 [2024-11-19 12:42:00.695672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.878 [2024-11-19 12:42:00.695718] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.878 [2024-11-19 12:42:00.695784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.878 [2024-11-19 12:42:00.695817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.878 [2024-11-19 12:42:00.695849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.878 [2024-11-19 12:42:00.695881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.878 [2024-11-19 12:42:00.695913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:127072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.695972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.695986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.696014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.696028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.696046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.696060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.696078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 
12:42:00.696091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.696109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.696123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.696141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.878 [2024-11-19 12:42:00.696154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:42.878 [2024-11-19 12:42:00.696172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.696186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.696221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.696248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.696272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.696287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.696311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.696327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.696346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.696359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.696377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.696391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.696409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.696422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.696441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127184 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.696468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.696490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.696505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.697843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.697871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.697896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.697911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.697930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.697944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.697963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.697976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.697995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.879 [2024-11-19 12:42:00.698008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.698027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.879 [2024-11-19 12:42:00.698041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.698059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.879 [2024-11-19 12:42:00.698072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.698091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.879 [2024-11-19 12:42:00.698104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.698123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.879 [2024-11-19 12:42:00.698136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.698155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.879 [2024-11-19 12:42:00.698169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.698187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.879 [2024-11-19 12:42:00.698211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.698231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.879 [2024-11-19 12:42:00.698247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.698265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.698279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.698297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.698311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.698330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.698344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.698363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.698378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.701933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.701968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.701996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.702013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 
dnr:0 00:22:42.879 [2024-11-19 12:42:00.702032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.702047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.702066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.702080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.702099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:127296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.702114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.702133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.702146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.702165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.702192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.702214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.702229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.702248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.702262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.702281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.702295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.702314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.702328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.702346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.702360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.702379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.702394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:42.879 [2024-11-19 12:42:00.702413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.879 [2024-11-19 12:42:00.702427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:42.879 9797.00 IOPS, 38.27 MiB/s [2024-11-19T12:42:48.140Z] 9797.60 IOPS, 38.27 MiB/s [2024-11-19T12:42:48.140Z] 9807.00 IOPS, 38.31 MiB/s [2024-11-19T12:42:48.140Z] 9831.75 IOPS, 38.41 MiB/s [2024-11-19T12:42:48.140Z] 9861.31 IOPS, 38.52 MiB/s [2024-11-19T12:42:48.140Z] 9883.79 IOPS, 38.61 MiB/s [2024-11-19T12:42:48.140Z] [2024-11-19 12:42:07.236279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:123688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.880 [2024-11-19 12:42:07.236328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:123696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.880 [2024-11-19 12:42:07.236400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:123704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.880 [2024-11-19 12:42:07.236435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.880 [2024-11-19 12:42:07.236467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:123720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.880 [2024-11-19 12:42:07.236519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.880 [2024-11-19 12:42:07.236554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:123736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.880 [2024-11-19 12:42:07.236586] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:123744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.880 [2024-11-19 12:42:07.236617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.880 [2024-11-19 12:42:07.236649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.880 [2024-11-19 12:42:07.236713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:123256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.880 [2024-11-19 12:42:07.236746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:123264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.880 [2024-11-19 12:42:07.236778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.880 [2024-11-19 12:42:07.236810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.880 [2024-11-19 12:42:07.236842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.880 [2024-11-19 12:42:07.236875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.880 [2024-11-19 12:42:07.236906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.880 
[2024-11-19 12:42:07.236947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.236968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.880 [2024-11-19 12:42:07.236982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.237001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.880 [2024-11-19 12:42:07.237015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.237033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.880 [2024-11-19 12:42:07.237047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.237081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.880 [2024-11-19 12:42:07.237095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.237113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:123344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.880 [2024-11-19 12:42:07.237127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.237145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.880 [2024-11-19 12:42:07.237160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.237178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:123360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.880 [2024-11-19 12:42:07.237192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.237404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.880 [2024-11-19 12:42:07.237427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.237448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:123760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.880 [2024-11-19 12:42:07.237463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.237481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:123768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.880 [2024-11-19 12:42:07.237495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.237513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:123776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.880 [2024-11-19 12:42:07.237527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.237546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.880 [2024-11-19 12:42:07.237568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.237588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:123792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.880 [2024-11-19 12:42:07.237602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.237621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:123800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.880 [2024-11-19 12:42:07.237635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:42.880 [2024-11-19 12:42:07.237653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:123808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.237666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.237703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.237730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.237751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.237765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.237785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.237798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.237818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.237831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.237852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.237866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.237885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.237899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.237918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.237932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.237951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:123424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.237965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.237984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:123816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.237997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:123824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.238056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.238088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:123840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.238120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.238153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.238186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 
m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:123864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.238218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.238250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.238304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:123888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.238338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.238370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:123904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.238402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.238436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.238482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:123928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.238514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:123936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.881 [2024-11-19 12:42:07.238546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:123432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.238578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.238609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.238642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:123456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.238673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.238723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:123472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.238756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.238789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.238820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.238852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.238891] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.238925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.881 [2024-11-19 12:42:07.238957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:42.881 [2024-11-19 12:42:07.238976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.238990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.239028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.239062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.239094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:123560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.239126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:123568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.239158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:123576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.239190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:123584 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.239222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.239254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.239292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:123608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.239371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:123616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.239405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:123944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.239439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:123952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.239473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:123960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.239506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:123968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.239540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:123976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.239574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:123984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.239610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.239645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:124000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.239678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.239740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.239781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.239815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:124032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.239848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:123624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.239880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:123632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.239912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.239945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:22:42.882 [2024-11-19 12:42:07.239963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:123648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.239977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.239996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.240009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.240028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.240042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.240061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.240075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.240695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.882 [2024-11-19 12:42:07.240722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.240754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.240770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.240795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.240811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.240848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.240863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.240888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:124064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.240901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.240925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.240939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:42.882 [2024-11-19 12:42:07.240963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:124080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.882 [2024-11-19 12:42:07.240977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:07.241001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:124088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:07.241015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:07.241056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:07.241075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:07.241101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:124104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:07.241115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:07.241140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:07.241153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:07.241177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:124120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:07.241191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:07.241215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:124128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:07.241229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:07.241253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:124136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:07.241267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:07.241291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:124144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:07.241305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:07.241339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:124152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:07.241354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:07.241382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:124160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:07.241398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:07.241423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:124168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:07.241438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:07.241462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:07.241479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:07.241504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:07.241518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:07.241543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:07.241556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:07.241580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:124200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:07.241594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:07.241618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:07.241632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:42.883 9765.13 IOPS, 38.15 MiB/s [2024-11-19T12:42:48.143Z] 9274.56 IOPS, 36.23 MiB/s [2024-11-19T12:42:48.143Z] 9325.47 IOPS, 36.43 MiB/s [2024-11-19T12:42:48.143Z] 9370.72 IOPS, 36.60 MiB/s [2024-11-19T12:42:48.143Z] 9422.79 IOPS, 36.81 MiB/s [2024-11-19T12:42:48.143Z] 9466.05 IOPS, 36.98 MiB/s [2024-11-19T12:42:48.143Z] 9507.10 IOPS, 37.14 MiB/s [2024-11-19T12:42:48.143Z] [2024-11-19 12:42:14.387992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:38 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.883 [2024-11-19 12:42:14.388845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:42.883 [2024-11-19 12:42:14.388864] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.884 [2024-11-19 12:42:14.388878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.388897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.884 [2024-11-19 12:42:14.388911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.388931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.884 [2024-11-19 12:42:14.388944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.388963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.388977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.388997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:101736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 
cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:101904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.389968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.389987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.390009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.390031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.390045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.390065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.390095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.390114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.884 [2024-11-19 12:42:14.390128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.390151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.884 [2024-11-19 12:42:14.390167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.390202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.884 [2024-11-19 12:42:14.390216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.390235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.884 [2024-11-19 12:42:14.390248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:42.884 [2024-11-19 12:42:14.390267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.884 [2024-11-19 12:42:14.390281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:100 nsid:1 lba:102376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.390313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.390345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-11-19 12:42:14.390377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-11-19 12:42:14.390410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-11-19 12:42:14.390448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-11-19 12:42:14.390500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-11-19 12:42:14.390533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-11-19 12:42:14.390566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-11-19 12:42:14.390599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-11-19 12:42:14.390632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 
12:42:14.390651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.390665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.390698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:102408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.390730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.390779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.390830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.390862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.390895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.390936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.390972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.390992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.391005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.391024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-11-19 12:42:14.391038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.391056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-11-19 12:42:14.391070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.391088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-11-19 12:42:14.391102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.391120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-11-19 12:42:14.391133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.391152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-11-19 12:42:14.391165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.391183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-11-19 12:42:14.391197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.391216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-11-19 12:42:14.391229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.391248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.885 [2024-11-19 12:42:14.391262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.391284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.391300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.391353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.391370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.391389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:102488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.391404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.391423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.391438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.391457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.391471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:42.885 [2024-11-19 12:42:14.391491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.885 [2024-11-19 12:42:14.391505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.391525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.391542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.391562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.391577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.391597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.391611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.391631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.391660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.391679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:102552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.391736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.391761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102560 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:42.886 [2024-11-19 12:42:14.391776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.391795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.391808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.391835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.391850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.391870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:102584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.391884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.391903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.391917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.391936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.391950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.391969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.391983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.392002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.392016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.392035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.392049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.392068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.392081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.392100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:103 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.392114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.392148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.392164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.392183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.886 [2024-11-19 12:42:14.392197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.392215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:102096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.886 [2024-11-19 12:42:14.392229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.392247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.886 [2024-11-19 12:42:14.392270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.392290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.886 [2024-11-19 12:42:14.392304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.392322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.886 [2024-11-19 12:42:14.392336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.392354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.886 [2024-11-19 12:42:14.392368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.392387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.886 [2024-11-19 12:42:14.392407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.393014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.886 [2024-11-19 12:42:14.393041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 
12:42:14.393073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.393088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.393114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.393128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.393153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.393167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.393192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.393206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.393231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.393244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.393269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.393283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.393308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.393336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.393379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.393401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:14.393429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:14.393445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:42.886 9514.59 IOPS, 37.17 MiB/s [2024-11-19T12:42:48.146Z] 9100.91 IOPS, 35.55 MiB/s [2024-11-19T12:42:48.146Z] 8721.71 IOPS, 34.07 MiB/s [2024-11-19T12:42:48.146Z] 8372.84 IOPS, 32.71 MiB/s [2024-11-19T12:42:48.146Z] 8050.81 IOPS, 31.45 MiB/s [2024-11-19T12:42:48.146Z] 7752.63 IOPS, 30.28 MiB/s 
[2024-11-19T12:42:48.146Z] 7475.75 IOPS, 29.20 MiB/s [2024-11-19T12:42:48.146Z] 7229.86 IOPS, 28.24 MiB/s [2024-11-19T12:42:48.146Z] 7315.27 IOPS, 28.58 MiB/s [2024-11-19T12:42:48.146Z] 7400.58 IOPS, 28.91 MiB/s [2024-11-19T12:42:48.146Z] 7481.31 IOPS, 29.22 MiB/s [2024-11-19T12:42:48.146Z] 7555.94 IOPS, 29.52 MiB/s [2024-11-19T12:42:48.146Z] 7628.53 IOPS, 29.80 MiB/s [2024-11-19T12:42:48.146Z] 7694.46 IOPS, 30.06 MiB/s [2024-11-19T12:42:48.146Z] [2024-11-19 12:42:27.702290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:27.702342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:42.886 [2024-11-19 12:42:27.702410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.886 [2024-11-19 12:42:27.702430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.702451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.702465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.702483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.702497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.702516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.702529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.702548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.702561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.702580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.702594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.702612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.702625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.702644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.887 [2024-11-19 12:42:27.702722] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.702748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.887 [2024-11-19 12:42:27.702763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.702783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.887 [2024-11-19 12:42:27.702797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.702817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.887 [2024-11-19 12:42:27.702831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.702850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.887 [2024-11-19 12:42:27.702864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.702884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.887 [2024-11-19 12:42:27.702898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.702917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.887 [2024-11-19 12:42:27.702931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.702951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.887 [2024-11-19 12:42:27.702966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:42.887 [2024-11-19 12:42:27.703117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703435] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703827] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.887 [2024-11-19 12:42:27.703855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.887 [2024-11-19 12:42:27.703883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.887 [2024-11-19 12:42:27.703897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.703911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.703925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.703938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.703952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.703965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.703980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.703993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:42.888 [2024-11-19 12:42:27.704417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.888 [2024-11-19 12:42:27.704560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.888 [2024-11-19 12:42:27.704587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.888 [2024-11-19 12:42:27.704612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.888 [2024-11-19 12:42:27.704638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.888 [2024-11-19 12:42:27.704663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704693] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.888 [2024-11-19 12:42:27.704723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.888 [2024-11-19 12:42:27.704766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.888 [2024-11-19 12:42:27.704800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.704982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.704995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.888 [2024-11-19 12:42:27.705010] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.888 [2024-11-19 12:42:27.705038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.889 [2024-11-19 12:42:27.705080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.889 [2024-11-19 12:42:27.705106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.889 [2024-11-19 12:42:27.705132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.889 [2024-11-19 12:42:27.705157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.889 [2024-11-19 12:42:27.705190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.889 [2024-11-19 12:42:27.705216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.889 [2024-11-19 12:42:27.705242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.889 [2024-11-19 12:42:27.705268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.889 [2024-11-19 12:42:27.705294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 
lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.889 [2024-11-19 12:42:27.705319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.889 [2024-11-19 12:42:27.705345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.889 [2024-11-19 12:42:27.705371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.889 [2024-11-19 12:42:27.705397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.889 [2024-11-19 12:42:27.705427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.889 [2024-11-19 12:42:27.705456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.889 [2024-11-19 12:42:27.705483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.889 [2024-11-19 12:42:27.705514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.889 [2024-11-19 12:42:27.705541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.889 [2024-11-19 12:42:27.705566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:42.889 [2024-11-19 12:42:27.705592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.889 [2024-11-19 12:42:27.705618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.889 [2024-11-19 12:42:27.705644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.889 [2024-11-19 12:42:27.705670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc748f0 is same with the state(6) to be set 00:22:42.889 [2024-11-19 12:42:27.705744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.889 [2024-11-19 12:42:27.705757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.889 [2024-11-19 12:42:27.705767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78536 len:8 PRP1 0x0 PRP2 0x0 00:22:42.889 [2024-11-19 12:42:27.705780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.889 [2024-11-19 12:42:27.705803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.889 [2024-11-19 12:42:27.705813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78928 len:8 PRP1 0x0 PRP2 0x0 00:22:42.889 [2024-11-19 12:42:27.705826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.889 [2024-11-19 12:42:27.705847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.889 [2024-11-19 12:42:27.705857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78936 len:8 PRP1 0x0 PRP2 0x0 00:22:42.889 [2024-11-19 12:42:27.705869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.889 [2024-11-19 12:42:27.705894] nvme_qpair.c: 55 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:42.889 8:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:42.889 [2024-11-19 12:42:27.705912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78944 len:8 PRP1 0x0 PRP2 0x0 00:22:42.889 [2024-11-19 12:42:27.705926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.889 [2024-11-19 12:42:27.705948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.889 [2024-11-19 12:42:27.705958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78952 len:8 PRP1 0x0 PRP2 0x0 00:22:42.889 [2024-11-19 12:42:27.705970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.705983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.889 [2024-11-19 12:42:27.705992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.889 [2024-11-19 12:42:27.706002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78960 len:8 PRP1 0x0 PRP2 0x0 00:22:42.889 [2024-11-19 12:42:27.706014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.706026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.889 [2024-11-19 12:42:27.706035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.889 [2024-11-19 12:42:27.706060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78968 len:8 PRP1 0x0 PRP2 0x0 00:22:42.889 [2024-11-19 12:42:27.706071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.889 [2024-11-19 12:42:27.706083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.889 [2024-11-19 12:42:27.706092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.889 [2024-11-19 12:42:27.706101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78976 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 [2024-11-19 12:42:27.706158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78984 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 
[2024-11-19 12:42:27.706198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78992 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 [2024-11-19 12:42:27.706239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79000 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 [2024-11-19 12:42:27.706305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79008 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 [2024-11-19 12:42:27.706348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79016 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 [2024-11-19 12:42:27.706390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79024 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 [2024-11-19 12:42:27.706432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79032 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 [2024-11-19 12:42:27.706473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79040 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 [2024-11-19 12:42:27.706516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79048 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 [2024-11-19 12:42:27.706558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79056 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 [2024-11-19 12:42:27.706600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79064 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 [2024-11-19 12:42:27.706666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79072 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 [2024-11-19 12:42:27.706761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79080 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 [2024-11-19 12:42:27.706805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:79088 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 [2024-11-19 12:42:27.706849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79096 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 [2024-11-19 12:42:27.706892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79104 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.890 [2024-11-19 12:42:27.706925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.890 [2024-11-19 12:42:27.706935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79112 len:8 PRP1 0x0 PRP2 0x0 00:22:42.890 [2024-11-19 12:42:27.706947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.706991] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc748f0 was disconnected and freed. reset controller. 
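The wall of ABORTED - SQ DELETION (00/08) completions above is the expected fallout of the step the interleaved test trace records: multipath.sh (line 120 per the trace) deletes the subsystem while bdevperf still has reads and writes queued on the I/O submission queue, so every outstanding command is completed manually with that status. The triggering call, as it appears in the trace:
# Deleting the subsystem under an active initiator tears down its submission
# queues; all queued I/O then completes with ABORTED - SQ DELETION.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1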
00:22:42.890 [2024-11-19 12:42:27.707136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.890 [2024-11-19 12:42:27.707162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.707176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.890 [2024-11-19 12:42:27.707189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.707211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.890 [2024-11-19 12:42:27.707225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.707237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.890 [2024-11-19 12:42:27.707250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.707263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.890 [2024-11-19 12:42:27.707275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.890 [2024-11-19 12:42:27.707294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc41370 is same with the state(6) to be set 00:22:42.890 [2024-11-19 12:42:27.708405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:42.891 [2024-11-19 12:42:27.708443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc41370 (9): Bad file descriptor 00:22:42.891 [2024-11-19 12:42:27.708838] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.891 [2024-11-19 12:42:27.708871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc41370 with addr=10.0.0.3, port=4421 00:22:42.891 [2024-11-19 12:42:27.708888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc41370 is same with the state(6) to be set 00:22:42.891 [2024-11-19 12:42:27.708954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc41370 (9): Bad file descriptor 00:22:42.891 [2024-11-19 12:42:27.708990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:42.891 [2024-11-19 12:42:27.709008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:42.891 [2024-11-19 12:42:27.709021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:42.891 [2024-11-19 12:42:27.709067] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
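At this point the admin queue pair is gone as well, and the host's reconnect to 10.0.0.3 port 4421 fails with errno 111 (connection refused), so the first controller reset attempt ends in the failed state; the retry about ten seconds later (the 12:42:37 entry below) succeeds. If a run hangs here, one way to inspect the attached controllers from the bdevperf side is the controller-listing RPC; a hedged example, assuming bdevperf's RPC socket lives at /var/tmp/bdevperf.sock as declared for the timeout test further down:
# Illustrative only: dump the NVMe-oF controllers bdevperf has attached and
# their current state while the reset/reconnect is in flight.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers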
00:22:42.891 [2024-11-19 12:42:27.709085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:42.891 7750.08 IOPS, 30.27 MiB/s [2024-11-19T12:42:48.151Z] 7798.78 IOPS, 30.46 MiB/s [2024-11-19T12:42:48.151Z] 7850.82 IOPS, 30.67 MiB/s [2024-11-19T12:42:48.151Z] 7903.26 IOPS, 30.87 MiB/s [2024-11-19T12:42:48.151Z] 7953.07 IOPS, 31.07 MiB/s [2024-11-19T12:42:48.151Z] 7999.49 IOPS, 31.25 MiB/s [2024-11-19T12:42:48.151Z] 8043.69 IOPS, 31.42 MiB/s [2024-11-19T12:42:48.151Z] 8081.19 IOPS, 31.57 MiB/s [2024-11-19T12:42:48.151Z] 8119.52 IOPS, 31.72 MiB/s [2024-11-19T12:42:48.151Z] 8158.11 IOPS, 31.87 MiB/s [2024-11-19T12:42:48.151Z] [2024-11-19 12:42:37.766188] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:42.891 8195.39 IOPS, 32.01 MiB/s [2024-11-19T12:42:48.151Z] 8231.83 IOPS, 32.16 MiB/s [2024-11-19T12:42:48.151Z] 8267.00 IOPS, 32.29 MiB/s [2024-11-19T12:42:48.151Z] 8301.55 IOPS, 32.43 MiB/s [2024-11-19T12:42:48.151Z] 8327.20 IOPS, 32.53 MiB/s [2024-11-19T12:42:48.151Z] 8359.22 IOPS, 32.65 MiB/s [2024-11-19T12:42:48.151Z] 8389.23 IOPS, 32.77 MiB/s [2024-11-19T12:42:48.151Z] 8418.11 IOPS, 32.88 MiB/s [2024-11-19T12:42:48.151Z] 8447.11 IOPS, 33.00 MiB/s [2024-11-19T12:42:48.151Z] 8473.31 IOPS, 33.10 MiB/s [2024-11-19T12:42:48.151Z] Received shutdown signal, test time was about 55.451698 seconds 00:22:42.891 00:22:42.891 Latency(us) 00:22:42.891 [2024-11-19T12:42:48.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.891 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:42.891 Verification LBA range: start 0x0 length 0x4000 00:22:42.891 Nvme0n1 : 55.45 8481.37 33.13 0.00 0.00 15062.18 366.78 7015926.69 00:22:42.891 [2024-11-19T12:42:48.151Z] =================================================================================================================== 00:22:42.891 [2024-11-19T12:42:48.151Z] Total : 8481.37 33.13 0.00 0.00 15062.18 366.78 7015926.69 00:22:42.891 [2024-11-19 12:42:47.878116] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:43.150 rmmod nvme_tcp 00:22:43.150 rmmod nvme_fabrics 00:22:43.150 rmmod nvme_keyring 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set 
-e 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@513 -- # '[' -n 96078 ']' 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # killprocess 96078 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 96078 ']' 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 96078 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96078 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96078' 00:22:43.150 killing process with pid 96078 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 96078 00:22:43.150 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 96078 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-save 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:43.426 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:22:43.695 00:22:43.695 real 1m0.262s 00:22:43.695 user 2m46.740s 00:22:43.695 sys 0m18.205s 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:43.695 ************************************ 00:22:43.695 END TEST nvmf_host_multipath 00:22:43.695 ************************************ 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.695 ************************************ 00:22:43.695 START TEST nvmf_timeout 00:22:43.695 ************************************ 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:43.695 * Looking for test storage... 
00:22:43.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:22:43.695 12:42:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:43.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.955 --rc genhtml_branch_coverage=1 00:22:43.955 --rc genhtml_function_coverage=1 00:22:43.955 --rc genhtml_legend=1 00:22:43.955 --rc geninfo_all_blocks=1 00:22:43.955 --rc geninfo_unexecuted_blocks=1 00:22:43.955 00:22:43.955 ' 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:43.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.955 --rc genhtml_branch_coverage=1 00:22:43.955 --rc genhtml_function_coverage=1 00:22:43.955 --rc genhtml_legend=1 00:22:43.955 --rc geninfo_all_blocks=1 00:22:43.955 --rc geninfo_unexecuted_blocks=1 00:22:43.955 00:22:43.955 ' 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:43.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.955 --rc genhtml_branch_coverage=1 00:22:43.955 --rc genhtml_function_coverage=1 00:22:43.955 --rc genhtml_legend=1 00:22:43.955 --rc geninfo_all_blocks=1 00:22:43.955 --rc geninfo_unexecuted_blocks=1 00:22:43.955 00:22:43.955 ' 00:22:43.955 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:43.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.955 --rc genhtml_branch_coverage=1 00:22:43.955 --rc genhtml_function_coverage=1 00:22:43.955 --rc genhtml_legend=1 00:22:43.955 --rc geninfo_all_blocks=1 00:22:43.956 --rc geninfo_unexecuted_blocks=1 00:22:43.956 00:22:43.956 ' 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.956 
12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:43.956 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:43.956 12:42:49 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:43.956 Cannot find device "nvmf_init_br" 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:43.956 Cannot find device "nvmf_init_br2" 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:22:43.956 Cannot find device "nvmf_tgt_br" 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:43.956 Cannot find device "nvmf_tgt_br2" 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:43.956 Cannot find device "nvmf_init_br" 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:43.956 Cannot find device "nvmf_init_br2" 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:43.956 Cannot find device "nvmf_tgt_br" 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:43.956 Cannot find device "nvmf_tgt_br2" 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:43.956 Cannot find device "nvmf_br" 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:22:43.956 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:43.956 Cannot find device "nvmf_init_if" 00:22:43.957 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:22:43.957 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:43.957 Cannot find device "nvmf_init_if2" 00:22:43.957 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:22:43.957 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:43.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:43.957 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:22:43.957 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:43.957 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:43.957 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:22:43.957 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:43.957 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:43.957 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:43.957 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
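
Editor's note: the trace above is nvmf_veth_init rebuilding the test network from scratch. The "Cannot find device" and "Cannot open network namespace" messages are the expected result of tearing down interfaces and a namespace that do not exist yet; after the cleanup, a fresh nvmf_tgt_ns_spdk namespace and four veth pairs are created and the target-side ends are moved into the namespace. A condensed sketch of those creation steps, assuming root privileges and iproute2, using the same names as nvmf/common.sh:

    # Target-side network namespace used by the test.
    ip netns add nvmf_tgt_ns_spdk

    # Four veth pairs: two initiator-side, two target-side.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace; their bridge-side peers stay in the root namespace.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
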
00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
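
Editor's note: addressing, bridging and firewalling follow in the trace: the initiator ends get 10.0.0.1/24 and 10.0.0.2/24, the namespaced target ends get 10.0.0.3/24 and 10.0.0.4/24, all bridge-side peers are enslaved to nvmf_br, and TCP port 4420 is opened on the initiator interfaces. A condensed sketch under the same assumptions as above; the loops are shorthand introduced here, and the SPDK_NVMF comment tags that the ipts wrapper appends to each iptables rule are omitted:

    # Addresses: initiators in the root namespace, targets inside nvmf_tgt_ns_spdk.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring all links up on both sides of the namespace boundary.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # One bridge ties the four bridge-side veth ends together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Allow NVMe/TCP traffic (port 4420) in, and let the bridge forward it.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four ping checks that follow in the trace simply confirm the topology: the root namespace can reach 10.0.0.3 and 10.0.0.4, and the namespace can reach 10.0.0.1 and 10.0.0.2 back across the bridge.
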
00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:44.216 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:44.216 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:22:44.216 00:22:44.216 --- 10.0.0.3 ping statistics --- 00:22:44.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.216 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:44.216 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:44.216 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:22:44.216 00:22:44.216 --- 10.0.0.4 ping statistics --- 00:22:44.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.216 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:44.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:44.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:22:44.216 00:22:44.216 --- 10.0.0.1 ping statistics --- 00:22:44.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.216 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:44.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:22:44.216 00:22:44.216 --- 10.0.0.2 ping statistics --- 00:22:44.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.216 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # return 0 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # nvmfpid=97280 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # waitforlisten 97280 00:22:44.216 12:42:49 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 97280 ']' 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:44.216 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.476 [2024-11-19 12:42:49.514340] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:44.476 [2024-11-19 12:42:49.514428] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.476 [2024-11-19 12:42:49.653412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:44.476 [2024-11-19 12:42:49.688381] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.476 [2024-11-19 12:42:49.688447] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.476 [2024-11-19 12:42:49.688457] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.476 [2024-11-19 12:42:49.688465] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.476 [2024-11-19 12:42:49.688471] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
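
Editor's note: nvmfappstart launches the SPDK target inside the namespace (core mask 0x3, all tracepoint groups enabled) and then blocks in waitforlisten until the RPC socket answers. waitforlisten is an autotest helper; the loop below is a hypothetical stand-in for it, polling the default socket with a real rpc.py method, with the binary and socket paths taken from the trace:

    # Start nvmf_tgt inside the target namespace, as traced above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # Stand-in for waitforlisten: poll until the target answers on /var/tmp/spdk.sock.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1    # give up if the target process died
        sleep 0.5
    done

The UNIX-domain RPC socket lives on the shared filesystem, so it stays reachable from the root namespace even though the target's network stack is confined to nvmf_tgt_ns_spdk.
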
00:22:44.476 [2024-11-19 12:42:49.688607] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.476 [2024-11-19 12:42:49.688617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.476 [2024-11-19 12:42:49.717883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:44.735 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:44.735 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:44.735 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:44.735 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:44.735 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.735 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.735 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:44.735 12:42:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:44.994 [2024-11-19 12:42:50.002512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.994 12:42:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:45.254 Malloc0 00:22:45.254 12:42:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:45.513 12:42:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:45.773 12:42:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:46.032 [2024-11-19 12:42:51.123096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:46.032 12:42:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:46.032 12:42:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=97326 00:22:46.032 12:42:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 97326 /var/tmp/bdevperf.sock 00:22:46.032 12:42:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 97326 ']' 00:22:46.032 12:42:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.032 12:42:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:46.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:46.032 12:42:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
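
Editor's note: host/timeout.sh then provisions the target over JSON-RPC and starts bdevperf as the initiator-side workload generator. The same sequence, condensed from the trace with the paths printed there; the $rpc shorthand is introduced here only for readability:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side: TCP transport, a 64 MiB / 512 B-block malloc bdev, and a subsystem exporting it on 10.0.0.3:4420.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Initiator side: bdevperf starts idle (-z) on its own RPC socket, ready to run
    # a 10 s, 4 KiB, queue-depth-128 verify workload once configured.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!
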
00:22:46.032 12:42:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:46.032 12:42:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:46.032 [2024-11-19 12:42:51.184423] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:22:46.032 [2024-11-19 12:42:51.184520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97326 ] 00:22:46.291 [2024-11-19 12:42:51.318157] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.291 [2024-11-19 12:42:51.359863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.291 [2024-11-19 12:42:51.393422] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:46.860 12:42:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:46.860 12:42:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:46.860 12:42:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:47.119 12:42:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:47.378 NVMe0n1 00:22:47.378 12:42:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:47.378 12:42:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=97350 00:22:47.378 12:42:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:47.637 Running I/O for 10 seconds... 
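
Editor's note: once bdevperf is listening, the test points it at the subsystem with a deliberately short controller-loss window (5 s loss timeout, 2 s reconnect delay) and then starts the run through bdevperf.py. Condensed from the trace; the bdev_nvme_set_options arguments are reproduced exactly as printed there:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Configure the bdev_nvme module inside bdevperf, then attach the remote subsystem as NVMe0
    # with a short controller-loss timeout so the failure path under test triggers quickly.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

    # Kick off the queued verify workload against the freshly created NVMe0n1 bdev.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &
    rpc_pid=$!
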
00:22:48.574 12:42:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:48.837 7956.00 IOPS, 31.08 MiB/s [2024-11-19T12:42:54.097Z] [2024-11-19 12:42:53.870941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 
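
Editor's note: the burst of tcp.c:1773 "recv state" errors above, and the aborted READ completions that follow, appear to start the moment host/timeout.sh removes the listener out from under the running workload: with the connection torn down, outstanding reads complete as ABORTED - SQ DELETION and bdev_nvme falls back to the 2 s reconnect / 5 s controller-loss timers configured earlier. The triggering call, as traced at host/timeout.sh@55:

    # Drop the listener while bdevperf is mid-run, severing the NVMe/TCP connection.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
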
00:22:48.837 [2024-11-19 12:42:53.871189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.837 [2024-11-19 12:42:53.871565] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the 
state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155480 is same with the state(6) to be set 00:22:48.838 [2024-11-19 12:42:53.871982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.838 [2024-11-19 12:42:53.872336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.838 [2024-11-19 12:42:53.872346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 
[2024-11-19 12:42:53.872364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:35 nsid:1 lba:75352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75432 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.872986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.872994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.873004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.873012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.873022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.873030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.873040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.873048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.873058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.873073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.839 [2024-11-19 12:42:53.873083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.839 [2024-11-19 12:42:53.873092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:48.840 [2024-11-19 12:42:53.873129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873315] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.840 [2024-11-19 12:42:53.873832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.840 [2024-11-19 12:42:53.873842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.841 [2024-11-19 12:42:53.873850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.873861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.841 [2024-11-19 12:42:53.873869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.873879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.841 [2024-11-19 12:42:53.873887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.873897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.841 [2024-11-19 12:42:53.873905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.873915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.841 [2024-11-19 12:42:53.873924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.873933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.841 [2024-11-19 12:42:53.873942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.873952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.841 [2024-11-19 12:42:53.873960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.873970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.841 [2024-11-19 12:42:53.873979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.873989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.841 [2024-11-19 12:42:53.873998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.841 [2024-11-19 12:42:53.874016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.841 [2024-11-19 12:42:53.874034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.841 [2024-11-19 12:42:53.874053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.841 [2024-11-19 12:42:53.874072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 
[2024-11-19 12:42:53.874082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.841 [2024-11-19 12:42:53.874090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.841 [2024-11-19 12:42:53.874108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.841 [2024-11-19 12:42:53.874128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.841 [2024-11-19 12:42:53.874147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.841 [2024-11-19 12:42:53.874165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.841 [2024-11-19 12:42:53.874184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.841 [2024-11-19 12:42:53.874202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.841 [2024-11-19 12:42:53.874221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.841 [2024-11-19 12:42:53.874239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.841 [2024-11-19 12:42:53.874257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.841 [2024-11-19 12:42:53.874277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.841 [2024-11-19 12:42:53.874295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.841 [2024-11-19 12:42:53.874313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.841 [2024-11-19 12:42:53.874331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.841 [2024-11-19 12:42:53.874350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.841 [2024-11-19 12:42:53.874368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.841 [2024-11-19 12:42:53.874386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228ede0 is same with the state(6) to be set 00:22:48.841 [2024-11-19 12:42:53.874407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.841 [2024-11-19 12:42:53.874413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.841 [2024-11-19 12:42:53.874422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75936 len:8 PRP1 0x0 PRP2 0x0 00:22:48.841 [2024-11-19 12:42:53.874431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.841 [2024-11-19 12:42:53.874469] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x228ede0 was disconnected and freed. reset controller. 
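The wall of paired READ/WRITE prints and ABORTED - SQ DELETION (00/08) completions above is the qpair being drained: once the TCP connection to the target is lost, every command still queued on submission queue 1 is completed manually with that abort status, one pair of lines per outstanding command, and the episode ends with qpair 0x228ede0 being disconnected and freed and a controller reset being scheduled. To gauge how much I/O was in flight when the link dropped, the completions can simply be counted in a saved copy of this console output; a minimal sketch, assuming the output has been captured to a hypothetical file named bdevperf-console.log:

  # count every queued command that was completed with ABORTED - SQ DELETION
  grep -o 'ABORTED - SQ DELETION' bdevperf-console.log | wc -l
  # list the LBAs that were still outstanding when the queue was torn down
  grep -o 'lba:[0-9]*' bdevperf-console.log | sort -u | head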
00:22:48.841 [2024-11-19 12:42:53.874717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:48.841 [2024-11-19 12:42:53.874800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226e500 (9): Bad file descriptor 00:22:48.841 [2024-11-19 12:42:53.874895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.842 [2024-11-19 12:42:53.874916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226e500 with addr=10.0.0.3, port=4420 00:22:48.842 [2024-11-19 12:42:53.874926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226e500 is same with the state(6) to be set 00:22:48.842 [2024-11-19 12:42:53.874942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226e500 (9): Bad file descriptor 00:22:48.842 [2024-11-19 12:42:53.874957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:48.842 [2024-11-19 12:42:53.874965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:48.842 [2024-11-19 12:42:53.874975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:48.842 [2024-11-19 12:42:53.874993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:48.842 [2024-11-19 12:42:53.875003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:48.842 12:42:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:50.715 4690.00 IOPS, 18.32 MiB/s [2024-11-19T12:42:55.975Z] 3126.67 IOPS, 12.21 MiB/s [2024-11-19T12:42:55.975Z] [2024-11-19 12:42:55.875132] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.715 [2024-11-19 12:42:55.875210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226e500 with addr=10.0.0.3, port=4420 00:22:50.715 [2024-11-19 12:42:55.875224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226e500 is same with the state(6) to be set 00:22:50.715 [2024-11-19 12:42:55.875245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226e500 (9): Bad file descriptor 00:22:50.715 [2024-11-19 12:42:55.875262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:50.715 [2024-11-19 12:42:55.875271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:50.715 [2024-11-19 12:42:55.875281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:50.715 [2024-11-19 12:42:55.875303] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
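Here the reconnect loop begins: bdev_nvme disconnects the controller, the uring socket's connect() toward 10.0.0.3:4420 fails with errno = 111 (ECONNREFUSED), controller re-initialization is declared failed, and after the script's two-second sleep the next attempt is scheduled. The refusal is expected in this suite: the target's TCP listener for the subsystem is taken away so that every reconnect is turned down at the socket level. The trigger corresponds to the listener-removal RPC this same log shows being issued for the following run; as a sketch, with arguments exactly as they appear in the trace:

  # remove the subsystem's TCP listener so reconnects to 10.0.0.3:4420 are refused (ECONNREFUSED)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420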
00:22:50.715 [2024-11-19 12:42:55.875313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:50.715 12:42:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:50.715 12:42:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:50.715 12:42:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:50.974 12:42:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:50.974 12:42:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:50.974 12:42:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:50.974 12:42:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:51.233 12:42:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:51.233 12:42:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:52.428 2345.00 IOPS, 9.16 MiB/s [2024-11-19T12:42:57.947Z] 1876.00 IOPS, 7.33 MiB/s [2024-11-19T12:42:57.947Z] [2024-11-19 12:42:57.875514] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.687 [2024-11-19 12:42:57.875578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x226e500 with addr=10.0.0.3, port=4420 00:22:52.687 [2024-11-19 12:42:57.875593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226e500 is same with the state(6) to be set 00:22:52.687 [2024-11-19 12:42:57.875615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x226e500 (9): Bad file descriptor 00:22:52.687 [2024-11-19 12:42:57.875632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:52.687 [2024-11-19 12:42:57.875641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:52.687 [2024-11-19 12:42:57.875651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:52.687 [2024-11-19 12:42:57.875674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:52.687 [2024-11-19 12:42:57.875714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:54.558 1563.33 IOPS, 6.11 MiB/s [2024-11-19T12:43:00.077Z] 1340.00 IOPS, 5.23 MiB/s [2024-11-19T12:43:00.077Z] [2024-11-19 12:42:59.875831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:54.817 [2024-11-19 12:42:59.875867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:54.817 [2024-11-19 12:42:59.875894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:54.817 [2024-11-19 12:42:59.875902] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:54.817 [2024-11-19 12:42:59.875926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
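The retries end with the controller stuck in the failed state ("already in failed state"), which is consistent with the controller-loss limit configured by this suite expiring: bdev_nvme stops reconnecting and drops the controller, and the NVMe0n1 bdev goes with it. The get_controller / get_bdev helpers traced above are how the script observes that transition: while the controller exists they print NVMe0 and NVMe0n1, and in the next block the same calls come back empty. Reduced to the two underlying commands, with the RPC socket path as in the trace:

  # prints "NVMe0" while the controller is still registered with bdevperf, nothing once it has been deleted
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # prints "NVMe0n1" while the namespace bdev still exists
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'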
00:22:55.753 1172.50 IOPS, 4.58 MiB/s 00:22:55.753 Latency(us) 00:22:55.753 [2024-11-19T12:43:01.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.753 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:55.753 Verification LBA range: start 0x0 length 0x4000 00:22:55.753 NVMe0n1 : 8.20 1144.54 4.47 15.62 0.00 110132.26 3410.85 7015926.69 00:22:55.753 [2024-11-19T12:43:01.013Z] =================================================================================================================== 00:22:55.753 [2024-11-19T12:43:01.013Z] Total : 1144.54 4.47 15.62 0.00 110132.26 3410.85 7015926.69 00:22:55.753 { 00:22:55.753 "results": [ 00:22:55.753 { 00:22:55.753 "job": "NVMe0n1", 00:22:55.753 "core_mask": "0x4", 00:22:55.753 "workload": "verify", 00:22:55.753 "status": "finished", 00:22:55.753 "verify_range": { 00:22:55.753 "start": 0, 00:22:55.753 "length": 16384 00:22:55.753 }, 00:22:55.753 "queue_depth": 128, 00:22:55.753 "io_size": 4096, 00:22:55.753 "runtime": 8.195451, 00:22:55.753 "iops": 1144.5373781137853, 00:22:55.753 "mibps": 4.470849133256974, 00:22:55.753 "io_failed": 128, 00:22:55.753 "io_timeout": 0, 00:22:55.753 "avg_latency_us": 110132.25858109917, 00:22:55.753 "min_latency_us": 3410.850909090909, 00:22:55.753 "max_latency_us": 7015926.69090909 00:22:55.753 } 00:22:55.753 ], 00:22:55.753 "core_count": 1 00:22:55.753 } 00:22:56.321 12:43:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:56.321 12:43:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:56.321 12:43:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:56.580 12:43:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:56.580 12:43:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:56.580 12:43:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:56.580 12:43:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:56.839 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:56.839 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 97350 00:22:56.839 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 97326 00:22:56.839 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 97326 ']' 00:22:56.839 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 97326 00:22:56.839 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:22:56.839 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:56.839 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97326 00:22:56.839 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:56.839 killing process with pid 97326 00:22:56.839 Received shutdown signal, test time was about 9.398962 seconds 00:22:56.839 00:22:56.839 Latency(us) 00:22:56.839 [2024-11-19T12:43:02.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.839 [2024-11-19T12:43:02.099Z] 
=================================================================================================================== 00:22:56.839 [2024-11-19T12:43:02.099Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.840 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:56.840 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97326' 00:22:56.840 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 97326 00:22:56.840 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 97326 00:22:57.098 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:57.357 [2024-11-19 12:43:02.406664] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:57.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.358 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=97467 00:22:57.358 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:57.358 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 97467 /var/tmp/bdevperf.sock 00:22:57.358 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 97467 ']' 00:22:57.358 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.358 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:57.358 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.358 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:57.358 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:57.358 [2024-11-19 12:43:02.470685] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:22:57.358 [2024-11-19 12:43:02.470966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97467 ] 00:22:57.358 [2024-11-19 12:43:02.601214] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.617 [2024-11-19 12:43:02.635687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.617 [2024-11-19 12:43:02.663489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:57.617 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:57.617 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:57.617 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:57.876 12:43:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:58.134 NVMe0n1 00:22:58.134 12:43:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=97483 00:22:58.134 12:43:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:58.134 12:43:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:58.134 Running I/O for 10 seconds... 
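Before the ten-second run that starts here, the fresh bdevperf instance (pid 97467) was pointed at the target with two RPCs whose options define the timeout behaviour under test: bdev_nvme_set_options -r -1, and a bdev_nvme_attach_controller call carrying a one-second reconnect delay, a two-second fast-I/O-fail window and a five-second controller-loss limit. Grouped into one sketch, with socket path, address, NQN and flags exactly as in the trace; the comments describe the timeout knobs as I read them, not as stated in this log:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # global NVMe bdev options, passed through unchanged from the trace (-r -1)
  $RPC bdev_nvme_set_options -r -1
  # attach the target; retry the connection every 1 s, start failing outstanding I/O after 2 s,
  # and delete the controller (and its NVMe0n1 bdev) if it cannot be reconnected within 5 s
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1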
00:22:59.072 12:43:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:59.335 7956.00 IOPS, 31.08 MiB/s [2024-11-19T12:43:04.595Z] [2024-11-19 12:43:04.462516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 
00:22:59.335 [2024-11-19 12:43:04.462776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.462995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463228] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.335 [2024-11-19 12:43:04.463313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the 
state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201ca20 is same with the state(6) to be set 00:22:59.336 [2024-11-19 12:43:04.463711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.463755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.463792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.463802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.463813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.463822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.463833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.463842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.463853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.463862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.463873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.463882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.463893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.463903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.463913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.463922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.463934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.463943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.463954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.463963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.463974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.463983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.463994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.464003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.464014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.464023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.464033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.464042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.464053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.464062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.464072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.464081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.464092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.464103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.464114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.464123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.464134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.464142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 
[2024-11-19 12:43:04.464154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.464163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.464174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.464183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.464194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.464217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.336 [2024-11-19 12:43:04.464228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.336 [2024-11-19 12:43:04.464237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:92 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71336 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.337 [2024-11-19 12:43:04.464979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.464990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.464999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.465009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.337 [2024-11-19 12:43:04.465018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.337 [2024-11-19 12:43:04.465029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465178] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.338 [2024-11-19 12:43:04.465850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.338 [2024-11-19 12:43:04.465861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-11-19 12:43:04.465870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.465880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-11-19 12:43:04.465889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.465900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-11-19 12:43:04.465909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.465919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-11-19 12:43:04.465929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.465939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-11-19 12:43:04.465948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.465959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-11-19 12:43:04.465968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.465979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-11-19 12:43:04.465988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.465999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-11-19 12:43:04.466007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 
[2024-11-19 12:43:04.466019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-11-19 12:43:04.466028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.339 [2024-11-19 12:43:04.466048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.339 [2024-11-19 12:43:04.466069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.339 [2024-11-19 12:43:04.466091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.339 [2024-11-19 12:43:04.466111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.339 [2024-11-19 12:43:04.466130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.339 [2024-11-19 12:43:04.466150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.339 [2024-11-19 12:43:04.466169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.339 [2024-11-19 12:43:04.466189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.339 [2024-11-19 12:43:04.466223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.339 [2024-11-19 12:43:04.466242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.339 [2024-11-19 12:43:04.466261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.339 [2024-11-19 12:43:04.466279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.339 [2024-11-19 12:43:04.466298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.339 [2024-11-19 12:43:04.466317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.339 [2024-11-19 12:43:04.466336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.339 [2024-11-19 12:43:04.466355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62de0 is same with the state(6) to be set 00:22:59.339 [2024-11-19 12:43:04.466376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.339 [2024-11-19 12:43:04.466383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.339 [2024-11-19 12:43:04.466393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71840 len:8 PRP1 0x0 PRP2 0x0 00:22:59.339 [2024-11-19 12:43:04.466402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.339 [2024-11-19 12:43:04.466444] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f62de0 was disconnected and freed. reset controller. 
00:22:59.339 [2024-11-19 12:43:04.466698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:59.339 [2024-11-19 12:43:04.466795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42500 (9): Bad file descriptor 00:22:59.339 [2024-11-19 12:43:04.466895] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.339 [2024-11-19 12:43:04.466916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42500 with addr=10.0.0.3, port=4420 00:22:59.339 [2024-11-19 12:43:04.466927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42500 is same with the state(6) to be set 00:22:59.339 [2024-11-19 12:43:04.466960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42500 (9): Bad file descriptor 00:22:59.339 [2024-11-19 12:43:04.466990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:59.339 [2024-11-19 12:43:04.467000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:59.339 [2024-11-19 12:43:04.467010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:59.339 [2024-11-19 12:43:04.467030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.339 [2024-11-19 12:43:04.467041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:59.339 12:43:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:23:00.275 4434.00 IOPS, 17.32 MiB/s [2024-11-19T12:43:05.535Z] [2024-11-19 12:43:05.467146] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.275 [2024-11-19 12:43:05.467188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42500 with addr=10.0.0.3, port=4420 00:23:00.275 [2024-11-19 12:43:05.467201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42500 is same with the state(6) to be set 00:23:00.276 [2024-11-19 12:43:05.467220] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42500 (9): Bad file descriptor 00:23:00.276 [2024-11-19 12:43:05.467237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:00.276 [2024-11-19 12:43:05.467245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:00.276 [2024-11-19 12:43:05.467255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:00.276 [2024-11-19 12:43:05.467275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:00.276 [2024-11-19 12:43:05.467285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:00.276 12:43:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:00.535 [2024-11-19 12:43:05.747257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:00.535 12:43:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 97483 00:23:01.361 2956.00 IOPS, 11.55 MiB/s [2024-11-19T12:43:06.621Z] [2024-11-19 12:43:06.485941] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:03.235 2217.00 IOPS, 8.66 MiB/s [2024-11-19T12:43:09.430Z] 3646.80 IOPS, 14.25 MiB/s [2024-11-19T12:43:10.366Z] 4832.33 IOPS, 18.88 MiB/s [2024-11-19T12:43:11.743Z] 5680.29 IOPS, 22.19 MiB/s [2024-11-19T12:43:12.680Z] 6335.25 IOPS, 24.75 MiB/s [2024-11-19T12:43:13.617Z] 6837.56 IOPS, 26.71 MiB/s [2024-11-19T12:43:13.617Z] 7238.60 IOPS, 28.28 MiB/s 00:23:08.357 Latency(us) 00:23:08.357 [2024-11-19T12:43:13.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.357 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:08.357 Verification LBA range: start 0x0 length 0x4000 00:23:08.357 NVMe0n1 : 10.01 7240.40 28.28 0.00 0.00 17649.68 1980.97 3035150.89 00:23:08.357 [2024-11-19T12:43:13.617Z] =================================================================================================================== 00:23:08.357 [2024-11-19T12:43:13.617Z] Total : 7240.40 28.28 0.00 0.00 17649.68 1980.97 3035150.89 00:23:08.357 { 00:23:08.357 "results": [ 00:23:08.357 { 00:23:08.357 "job": "NVMe0n1", 00:23:08.357 "core_mask": "0x4", 00:23:08.357 "workload": "verify", 00:23:08.357 "status": "finished", 00:23:08.357 "verify_range": { 00:23:08.357 "start": 0, 00:23:08.357 "length": 16384 00:23:08.357 }, 00:23:08.357 "queue_depth": 128, 00:23:08.357 "io_size": 4096, 00:23:08.357 "runtime": 10.007456, 00:23:08.357 "iops": 7240.4015565994, 00:23:08.357 "mibps": 28.282818580466405, 00:23:08.357 "io_failed": 0, 00:23:08.357 "io_timeout": 0, 00:23:08.357 "avg_latency_us": 17649.675090020803, 00:23:08.357 "min_latency_us": 1980.9745454545455, 00:23:08.357 "max_latency_us": 3035150.8945454545 00:23:08.357 } 00:23:08.357 ], 00:23:08.357 "core_count": 1 00:23:08.357 } 00:23:08.357 12:43:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97588 00:23:08.357 12:43:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:08.357 12:43:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:23:08.357 Running I/O for 10 seconds... 
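The bdevperf run above also reports its results as the JSON block shown (fields such as "iops", "mibps", "avg_latency_us", "io_failed", "io_timeout"). A minimal sketch for summarizing that block, assuming the per-line timestamps have been stripped so only the JSON remains on stdin (the helper is hypothetical, not one of the SPDK scripts):

  #!/usr/bin/env python3
  # Hypothetical summary helper: read a bdevperf results JSON object on
  # stdin and print one line per job, using the field names visible in
  # the results block above.
  import json
  import sys

  data = json.load(sys.stdin)

  for job in data["results"]:
      print(f'{job["job"]}: {job["iops"]:.2f} IOPS, {job["mibps"]:.2f} MiB/s, '
            f'avg latency {job["avg_latency_us"]:.2f} us '
            f'(min {job["min_latency_us"]:.2f} / max {job["max_latency_us"]:.2f}), '
            f'io_failed={job["io_failed"]}, io_timeout={job["io_timeout"]}')

For example, extracting the block above into results.json and running "python3 summarize.py < results.json" would print the same 7240.40 IOPS / 28.28 MiB/s figures as the Latency table.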
00:23:09.319 12:43:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:09.581 7844.00 IOPS, 30.64 MiB/s [2024-11-19T12:43:14.841Z] [2024-11-19 12:43:14.630054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.581 [2024-11-19 12:43:14.630130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.581 [2024-11-19 12:43:14.630177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.581 [2024-11-19 12:43:14.630194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.581 [2024-11-19 12:43:14.630211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42500 is same with the state(6) to be set 00:23:09.581 [2024-11-19 12:43:14.630434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.581 [2024-11-19 12:43:14.630451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 
12:43:14.630748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.581 [2024-11-19 12:43:14.630900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.581 [2024-11-19 12:43:14.630910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.630918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.630927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.630935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.630945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.630952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.630962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.630970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.630979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.630987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.630997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:72 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72320 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 
12:43:14.631515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.582 [2024-11-19 12:43:14.631657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.582 [2024-11-19 12:43:14.631666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.631692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.631700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.632143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.632211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.632438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.632584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.632704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.632909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.632975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.633026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.633149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.633199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.633320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.633539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.633598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.633797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.633816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.633826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.633837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.633847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.633858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.633867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.633878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.633887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.633898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.633907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.633918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.633927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.633938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.633948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.633959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.633968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.633979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.633988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.633999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:09.583 [2024-11-19 12:43:14.634128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634314] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.583 [2024-11-19 12:43:14.634442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.583 [2024-11-19 12:43:14.634452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.584 [2024-11-19 12:43:14.634460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.584 [2024-11-19 12:43:14.634479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.584 [2024-11-19 12:43:14.634497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634508] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.584 [2024-11-19 12:43:14.634516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.584 [2024-11-19 12:43:14.634535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.584 [2024-11-19 12:43:14.634553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.584 [2024-11-19 12:43:14.634572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.584 [2024-11-19 12:43:14.634591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.584 [2024-11-19 12:43:14.634609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.584 [2024-11-19 12:43:14.634628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.584 [2024-11-19 12:43:14.634647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.584 [2024-11-19 12:43:14.634667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.584 [2024-11-19 12:43:14.634686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71888 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.584 [2024-11-19 12:43:14.634723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.584 [2024-11-19 12:43:14.634742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.584 [2024-11-19 12:43:14.634761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.584 [2024-11-19 12:43:14.634779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.584 [2024-11-19 12:43:14.634798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.584 [2024-11-19 12:43:14.634817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.584 [2024-11-19 12:43:14.634837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.584 [2024-11-19 12:43:14.634855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.584 [2024-11-19 12:43:14.634874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.584 [2024-11-19 12:43:14.634892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:09.584 [2024-11-19 12:43:14.634911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f653f0 is same with the state(6) to be set 00:23:09.584 [2024-11-19 12:43:14.634932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.584 [2024-11-19 12:43:14.634939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.584 [2024-11-19 12:43:14.634947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72856 len:8 PRP1 0x0 PRP2 0x0 00:23:09.584 [2024-11-19 12:43:14.634955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.584 [2024-11-19 12:43:14.634997] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f653f0 was disconnected and freed. reset controller. 00:23:09.584 [2024-11-19 12:43:14.635233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.584 [2024-11-19 12:43:14.635256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42500 (9): Bad file descriptor 00:23:09.584 [2024-11-19 12:43:14.635405] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.584 [2024-11-19 12:43:14.635431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42500 with addr=10.0.0.3, port=4420 00:23:09.584 [2024-11-19 12:43:14.635443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42500 is same with the state(6) to be set 00:23:09.584 [2024-11-19 12:43:14.635461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42500 (9): Bad file descriptor 00:23:09.584 [2024-11-19 12:43:14.635477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.584 [2024-11-19 12:43:14.635487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.584 [2024-11-19 12:43:14.635497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.584 [2024-11-19 12:43:14.635518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
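[Note, not from the captured output] The "(00/08)" pair that spdk_nvme_print_completion attaches to every aborted WRITE above is the NVMe status code type / status code: SCT 0x0 is the Generic Command Status set and generic status 0x08 is "Command Aborted due to SQ Deletion", so everything still queued on the deleted submission queue was failed back rather than lost silently. The "connect() failed, errno = 111" entries are ECONNREFUSED, since nothing is listening on 10.0.0.3:4420 at that point, which is why each reconnect attempt below keeps failing until the listener is restored. A hypothetical helper (not part of the SPDK test scripts) that decodes such a pair:

decode_nvme_status() {
    # $1 = status code type (hex), $2 = status code (hex), as printed in "(SCT/SC)".
    local sct=$((16#$1)) sc=$((16#$2))
    case "$sct" in
        0) printf 'Generic Command Status' ;;
        1) printf 'Command Specific Status' ;;
        2) printf 'Media and Data Integrity Errors' ;;
        3) printf 'Path Related Status' ;;
        *) printf 'Vendor Specific / Reserved' ;;
    esac
    if [ "$sct" -eq 0 ] && [ "$sc" -eq 8 ]; then
        printf ' / Command Aborted due to SQ Deletion\n'
    else
        printf ' / status code 0x%02x (see the NVMe base spec status tables)\n' "$sc"
    fi
}
decode_nvme_status 00 08   # -> Generic Command Status / Command Aborted due to SQ Deletion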
00:23:09.584 [2024-11-19 12:43:14.635528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.584 12:43:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:23:10.521 4490.00 IOPS, 17.54 MiB/s [2024-11-19T12:43:15.781Z] [2024-11-19 12:43:15.635625] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.521 [2024-11-19 12:43:15.635920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42500 with addr=10.0.0.3, port=4420 00:23:10.521 [2024-11-19 12:43:15.635944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42500 is same with the state(6) to be set 00:23:10.521 [2024-11-19 12:43:15.635968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42500 (9): Bad file descriptor 00:23:10.521 [2024-11-19 12:43:15.635986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:10.521 [2024-11-19 12:43:15.635997] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:10.521 [2024-11-19 12:43:15.636007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:10.521 [2024-11-19 12:43:15.636032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:10.522 [2024-11-19 12:43:15.636042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:11.460 2993.33 IOPS, 11.69 MiB/s [2024-11-19T12:43:16.720Z] [2024-11-19 12:43:16.636135] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.460 [2024-11-19 12:43:16.636190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42500 with addr=10.0.0.3, port=4420 00:23:11.460 [2024-11-19 12:43:16.636210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42500 is same with the state(6) to be set 00:23:11.460 [2024-11-19 12:43:16.636227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42500 (9): Bad file descriptor 00:23:11.460 [2024-11-19 12:43:16.636241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:11.460 [2024-11-19 12:43:16.636249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:11.460 [2024-11-19 12:43:16.636258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:11.460 [2024-11-19 12:43:16.636278] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:11.460 [2024-11-19 12:43:16.636288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.398 2245.00 IOPS, 8.77 MiB/s [2024-11-19T12:43:17.658Z] [2024-11-19 12:43:17.639165] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.398 [2024-11-19 12:43:17.639224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f42500 with addr=10.0.0.3, port=4420 00:23:12.398 [2024-11-19 12:43:17.639238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f42500 is same with the state(6) to be set 00:23:12.398 [2024-11-19 12:43:17.639498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f42500 (9): Bad file descriptor 00:23:12.398 [2024-11-19 12:43:17.639800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.398 [2024-11-19 12:43:17.639815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.398 [2024-11-19 12:43:17.639825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.398 [2024-11-19 12:43:17.643280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.398 [2024-11-19 12:43:17.643307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.398 12:43:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:12.966 [2024-11-19 12:43:17.940237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:12.966 12:43:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 97588 00:23:13.533 1796.00 IOPS, 7.02 MiB/s [2024-11-19T12:43:18.793Z] [2024-11-19 12:43:18.674258] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
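[Note, not from the captured output] The throughput readouts interleaved above while the controller is unreachable (4490.00, 2993.33, 2245.00, 1796.00 IOPS) look like cumulative averages over a completed-I/O count frozen at roughly 8,980 while the elapsed time grows from about 2 s to 5 s; once host/timeout.sh@102 re-adds the 10.0.0.3:4420 listener, the pending reset finally succeeds ("Resetting controller successful.") and the average climbs again in the summary that follows. A quick consistency check of that reading (the elapsed times are inferred, not printed by bdevperf):

# Inferred: ~8,980 I/Os completed before the listener was removed, none afterwards.
echo 'scale=2; 8980/2; 8980/3; 8980/4; 8980/5' | bc
# 4490.00
# 2993.33
# 2245.00
# 1796.00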
00:23:15.408 2971.17 IOPS, 11.61 MiB/s [2024-11-19T12:43:21.605Z] 4083.86 IOPS, 15.95 MiB/s [2024-11-19T12:43:22.542Z] 4940.38 IOPS, 19.30 MiB/s [2024-11-19T12:43:23.921Z] 5603.89 IOPS, 21.89 MiB/s [2024-11-19T12:43:23.921Z] 6142.90 IOPS, 24.00 MiB/s 00:23:18.661 Latency(us) 00:23:18.661 [2024-11-19T12:43:23.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.661 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:18.661 Verification LBA range: start 0x0 length 0x4000 00:23:18.661 NVMe0n1 : 10.01 6147.62 24.01 4207.97 0.00 12335.33 558.55 3019898.88 00:23:18.661 [2024-11-19T12:43:23.921Z] =================================================================================================================== 00:23:18.661 [2024-11-19T12:43:23.921Z] Total : 6147.62 24.01 4207.97 0.00 12335.33 0.00 3019898.88 00:23:18.661 { 00:23:18.661 "results": [ 00:23:18.661 { 00:23:18.661 "job": "NVMe0n1", 00:23:18.661 "core_mask": "0x4", 00:23:18.661 "workload": "verify", 00:23:18.661 "status": "finished", 00:23:18.661 "verify_range": { 00:23:18.661 "start": 0, 00:23:18.661 "length": 16384 00:23:18.661 }, 00:23:18.661 "queue_depth": 128, 00:23:18.661 "io_size": 4096, 00:23:18.661 "runtime": 10.006965, 00:23:18.661 "iops": 6147.618183934889, 00:23:18.661 "mibps": 24.01413353099566, 00:23:18.661 "io_failed": 42109, 00:23:18.661 "io_timeout": 0, 00:23:18.661 "avg_latency_us": 12335.331914084296, 00:23:18.661 "min_latency_us": 558.5454545454545, 00:23:18.661 "max_latency_us": 3019898.88 00:23:18.661 } 00:23:18.661 ], 00:23:18.661 "core_count": 1 00:23:18.661 } 00:23:18.661 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 97467 00:23:18.661 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 97467 ']' 00:23:18.661 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 97467 00:23:18.661 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:18.661 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.661 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97467 00:23:18.661 killing process with pid 97467 00:23:18.661 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.661 00:23:18.661 Latency(us) 00:23:18.661 [2024-11-19T12:43:23.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.661 [2024-11-19T12:43:23.921Z] =================================================================================================================== 00:23:18.661 [2024-11-19T12:43:23.921Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.661 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:18.661 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:18.661 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97467' 00:23:18.661 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 97467 00:23:18.661 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 97467 00:23:18.662 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 
10 -f 00:23:18.662 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97702 00:23:18.662 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97702 /var/tmp/bdevperf.sock 00:23:18.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.662 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 97702 ']' 00:23:18.662 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.662 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.662 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.662 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.662 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:18.662 [2024-11-19 12:43:23.751468] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:18.662 [2024-11-19 12:43:23.752298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97702 ] 00:23:18.662 [2024-11-19 12:43:23.887797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.921 [2024-11-19 12:43:23.921430] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.921 [2024-11-19 12:43:23.949336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:18.921 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.921 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:18.921 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97705 00:23:18.921 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97702 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:18.921 12:43:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:19.180 12:43:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:19.439 NVMe0n1 00:23:19.439 12:43:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97747 00:23:19.439 12:43:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:19.439 12:43:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:23:19.698 Running I/O for 10 seconds... 
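[Note, not from the captured output] The block above is the setup for the second timeout run; condensed into one place it amounts to the sketch below. Paths, addresses and option values are copied verbatim from this log (so they only apply to this particular vagrant environment), waitforlisten/killprocess are autotest helpers that are not reproduced here, and the -r/-e/-f flags are passed through without re-deriving their meaning.

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# host/timeout.sh@109-112: bdevperf on core 2 (-m 0x4), started idle (-z) so it can be
# driven over RPC, queue depth 128, 4 KiB random reads for 10 seconds.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w randread -t 10 -f &
bdevperf_pid=$!

# host/timeout.sh@115-116: attach the nvmf_timeout.bt bpftrace probes to that process.
"$SPDK/scripts/bpftrace.sh" "$bdevperf_pid" "$SPDK/scripts/bpf/nvmf_timeout.bt" &

# host/timeout.sh@118-120: apply the logged bdev_nvme options, then attach the TCP
# controller with a 5 s controller-loss timeout and a 2 s reconnect delay.
$RPC bdev_nvme_set_options -r -1 -e 9
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# host/timeout.sh@123-125: start the I/O job asynchronously and give it a second.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
sleep 1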
00:23:20.639 12:43:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:20.639 17399.00 IOPS, 67.96 MiB/s [2024-11-19T12:43:25.899Z] [2024-11-19 12:43:25.863882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.863929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.863956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.863964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.863972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.863979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.863986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.863993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.864001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.864008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.864016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.864023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.864031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.864038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.864060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.864083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.864090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.864096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.864103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 00:23:20.639 [2024-11-19 12:43:25.864110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set 
00:23:20.639 [2024-11-19 12:43:25.864117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019dd0 is same with the state(6) to be set
[... the same tcp.c:1773 nvmf_tcp_qpair_set_recv_state message for tqpair=0x2019dd0 repeated 39 more times between 12:43:25.864124 and 12:43:25.864390 ...]
00:23:20.640 [2024-11-19 12:43:25.864661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.864688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.864706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640
[2024-11-19 12:43:25.864715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.864755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.864766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.864776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.864784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.864794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.864803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.864812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.864820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.864846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.864854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.864865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.864874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.864884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.864892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.864902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.864910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.864920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.864928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.864938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.864946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.864956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.864964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.864973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.864981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.640 [2024-11-19 12:43:25.865327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.640 [2024-11-19 12:43:25.865336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:48056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:116064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:20.641 [2024-11-19 12:43:25.865557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:49096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865760] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.641 [2024-11-19 12:43:25.865921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.641 [2024-11-19 12:43:25.865931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.865940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.865950] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.865959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.865969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.865979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.865990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:56880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.865999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:24400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:114288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:28936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 
12:43:25.866537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.642 [2024-11-19 12:43:25.866609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.642 [2024-11-19 12:43:25.866618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:73640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866938] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.866987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.866995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.867006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.867014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.867025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.867035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.867046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.867055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.867066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.867074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.867085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.867093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.867104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.867113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.867123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.867131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.867142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.867150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.867160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:119016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.867169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.867180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.867190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.867201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.643 [2024-11-19 12:43:25.867209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.867219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6adb80 is same with the state(6) to be set 00:23:20.643 [2024-11-19 12:43:25.867231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.643 [2024-11-19 12:43:25.867238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.643 [2024-11-19 12:43:25.867246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34064 len:8 PRP1 0x0 PRP2 0x0 00:23:20.643 [2024-11-19 12:43:25.867254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.643 [2024-11-19 12:43:25.867295] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6adb80 was disconnected and freed. reset controller. 
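Every READ print in the block above is paired with an ABORTED - SQ DELETION completion: once the TCP connection to the target drops, nvme_qpair_abort_queued_reqs manually completes each command still queued on the qpair with that status, and the qpair (0x6adb80 here) is then disconnected and freed so the bdev layer can reset the controller. For a rough tally of how many queued commands were flushed this way, a one-liner like the following works against a saved copy of this console output (the file name is an assumption, not something the test writes):

```bash
# Count manually-completed commands: one "ABORTED - SQ DELETION" per aborted I/O.
# "console.log" is a placeholder for wherever this output was saved.
grep -o 'ABORTED - SQ DELETION' console.log | wc -l
```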
00:23:20.643 [2024-11-19 12:43:25.867622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:20.643 [2024-11-19 12:43:25.867730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68d500 (9): Bad file descriptor 00:23:20.643 [2024-11-19 12:43:25.867864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.643 [2024-11-19 12:43:25.867886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68d500 with addr=10.0.0.3, port=4420 00:23:20.644 [2024-11-19 12:43:25.867897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68d500 is same with the state(6) to be set 00:23:20.644 [2024-11-19 12:43:25.867930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68d500 (9): Bad file descriptor 00:23:20.644 [2024-11-19 12:43:25.867947] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:20.644 [2024-11-19 12:43:25.867959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:20.644 [2024-11-19 12:43:25.867970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:20.644 [2024-11-19 12:43:25.867992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.644 [2024-11-19 12:43:25.868002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:20.644 12:43:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 97747 00:23:22.517 10002.50 IOPS, 39.07 MiB/s [2024-11-19T12:43:28.036Z] 6668.33 IOPS, 26.05 MiB/s [2024-11-19T12:43:28.036Z] [2024-11-19 12:43:27.868134] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.776 [2024-11-19 12:43:27.868361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68d500 with addr=10.0.0.3, port=4420 00:23:22.776 [2024-11-19 12:43:27.868386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68d500 is same with the state(6) to be set 00:23:22.776 [2024-11-19 12:43:27.868427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68d500 (9): Bad file descriptor 00:23:22.776 [2024-11-19 12:43:27.868448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:22.776 [2024-11-19 12:43:27.868457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:22.776 [2024-11-19 12:43:27.868468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:22.776 [2024-11-19 12:43:27.868491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
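The connect() failures above report errno = 111, which on Linux is ECONNREFUSED: the target listener at 10.0.0.3:4420 has gone away for the timeout scenario, so each reconnect attempt is refused immediately and the controller is put back into the failed state. A quick way to confirm the errno mapping (assumes python3 is present on the VM, as it is for the rest of this job):

```bash
# Map errno 111 to its symbolic name and message.
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
```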
00:23:22.776 [2024-11-19 12:43:27.868503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:24.649 5001.25 IOPS, 19.54 MiB/s [2024-11-19T12:43:29.909Z] 4001.00 IOPS, 15.63 MiB/s [2024-11-19T12:43:29.909Z] [2024-11-19 12:43:29.868639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.649 [2024-11-19 12:43:29.868860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x68d500 with addr=10.0.0.3, port=4420 00:23:24.649 [2024-11-19 12:43:29.869006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68d500 is same with the state(6) to be set 00:23:24.649 [2024-11-19 12:43:29.869171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68d500 (9): Bad file descriptor 00:23:24.649 [2024-11-19 12:43:29.869344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:24.649 [2024-11-19 12:43:29.869366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:24.649 [2024-11-19 12:43:29.869378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:24.649 [2024-11-19 12:43:29.869403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.649 [2024-11-19 12:43:29.869414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:26.523 3334.17 IOPS, 13.02 MiB/s [2024-11-19T12:43:32.043Z] 2857.86 IOPS, 11.16 MiB/s [2024-11-19T12:43:32.043Z] [2024-11-19 12:43:31.869472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:26.783 [2024-11-19 12:43:31.869511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:26.783 [2024-11-19 12:43:31.869537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:26.783 [2024-11-19 12:43:31.869546] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:26.783 [2024-11-19 12:43:31.869569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
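The periodic readouts interleaved with these retries (10002.50, 6668.33, 5001.25, 4001.00, 3334.17, 2857.86 IOPS, ...) are consistent with a fixed total of roughly 20005 I/Os that completed before the connection dropped being averaged over an ever-longer elapsed time: nothing new completes while the reconnects fail, so the running average simply decays as total/seconds. The 20005 figure is inferred from the readouts, not printed by the tool; a small illustrative check:

```bash
# Reproduce the decaying per-interval averages: completions stay flat while
# elapsed time grows during the failed reconnect attempts.
for t in 2 3 4 5 6 7 8; do
    awk -v n=20005 -v t="$t" 'BEGIN { printf "after %ds: %.2f IOPS\n", t, n / t }'
done
```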
00:23:27.720 2500.62 IOPS, 9.77 MiB/s 00:23:27.721 Latency(us) 00:23:27.721 [2024-11-19T12:43:32.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.721 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:27.721 NVMe0n1 : 8.16 2451.27 9.58 15.68 0.00 51832.94 6821.70 7015926.69 00:23:27.721 [2024-11-19T12:43:32.981Z] =================================================================================================================== 00:23:27.721 [2024-11-19T12:43:32.981Z] Total : 2451.27 9.58 15.68 0.00 51832.94 6821.70 7015926.69 00:23:27.721 { 00:23:27.721 "results": [ 00:23:27.721 { 00:23:27.721 "job": "NVMe0n1", 00:23:27.721 "core_mask": "0x4", 00:23:27.721 "workload": "randread", 00:23:27.721 "status": "finished", 00:23:27.721 "queue_depth": 128, 00:23:27.721 "io_size": 4096, 00:23:27.721 "runtime": 8.161081, 00:23:27.721 "iops": 2451.268404271444, 00:23:27.721 "mibps": 9.575267204185328, 00:23:27.721 "io_failed": 128, 00:23:27.721 "io_timeout": 0, 00:23:27.721 "avg_latency_us": 51832.943461255374, 00:23:27.721 "min_latency_us": 6821.701818181818, 00:23:27.721 "max_latency_us": 7015926.69090909 00:23:27.721 } 00:23:27.721 ], 00:23:27.721 "core_count": 1 00:23:27.721 } 00:23:27.721 12:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:27.721 Attaching 5 probes... 00:23:27.721 1348.508537: reset bdev controller NVMe0 00:23:27.721 1348.711071: reconnect bdev controller NVMe0 00:23:27.721 3348.948398: reconnect delay bdev controller NVMe0 00:23:27.721 3348.981436: reconnect bdev controller NVMe0 00:23:27.721 5349.456223: reconnect delay bdev controller NVMe0 00:23:27.721 5349.488754: reconnect bdev controller NVMe0 00:23:27.721 7350.360119: reconnect delay bdev controller NVMe0 00:23:27.721 7350.391912: reconnect bdev controller NVMe0 00:23:27.721 12:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:27.721 12:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:27.721 12:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 97705 00:23:27.721 12:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:27.721 12:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97702 00:23:27.721 12:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 97702 ']' 00:23:27.721 12:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 97702 00:23:27.721 12:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:27.721 12:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:27.721 12:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97702 00:23:27.721 killing process with pid 97702 00:23:27.721 Received shutdown signal, test time was about 8.234570 seconds 00:23:27.721 00:23:27.721 Latency(us) 00:23:27.721 [2024-11-19T12:43:32.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.721 [2024-11-19T12:43:32.981Z] =================================================================================================================== 00:23:27.721 [2024-11-19T12:43:32.981Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.721 12:43:32 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:27.721 12:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:27.721 12:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97702' 00:23:27.721 12:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 97702 00:23:27.721 12:43:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 97702 00:23:27.981 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.240 rmmod nvme_tcp 00:23:28.240 rmmod nvme_fabrics 00:23:28.240 rmmod nvme_keyring 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@513 -- # '[' -n 97280 ']' 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # killprocess 97280 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 97280 ']' 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 97280 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97280 00:23:28.240 killing process with pid 97280 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97280' 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 97280 00:23:28.240 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 97280 00:23:28.499 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:28.500 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:28.500 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:28.500 12:43:33 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:23:28.500 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-save 00:23:28.500 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:28.500 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:23:28.500 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.500 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:28.500 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:28.500 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:28.500 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:28.500 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:28.500 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:28.500 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:28.500 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:28.500 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:28.500 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:28.759 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:28.759 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:28.759 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:28.759 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:28.759 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:28.759 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.759 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.759 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.759 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:23:28.759 00:23:28.759 real 0m45.033s 00:23:28.759 user 2m12.276s 00:23:28.759 sys 0m5.137s 00:23:28.759 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:28.759 12:43:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:28.759 ************************************ 00:23:28.759 END TEST nvmf_timeout 00:23:28.759 ************************************ 00:23:28.759 12:43:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:28.759 12:43:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:28.759 00:23:28.759 real 5m41.486s 00:23:28.759 user 16m1.287s 00:23:28.759 sys 1m15.943s 00:23:28.759 12:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:28.759 ************************************ 00:23:28.759 12:43:33 nvmf_tcp.nvmf_host 
-- common/autotest_common.sh@10 -- # set +x 00:23:28.759 END TEST nvmf_host 00:23:28.759 ************************************ 00:23:28.759 12:43:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:23:28.759 12:43:33 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:23:28.759 00:23:28.759 real 15m1.296s 00:23:28.759 user 39m35.406s 00:23:28.759 sys 4m0.475s 00:23:28.759 12:43:33 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:28.759 12:43:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:28.759 ************************************ 00:23:28.759 END TEST nvmf_tcp 00:23:28.759 ************************************ 00:23:28.759 12:43:33 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:23:28.759 12:43:33 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:28.759 12:43:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:28.759 12:43:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:28.759 12:43:33 -- common/autotest_common.sh@10 -- # set +x 00:23:28.759 ************************************ 00:23:28.759 START TEST nvmf_dif 00:23:28.759 ************************************ 00:23:28.759 12:43:34 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:29.019 * Looking for test storage... 00:23:29.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:29.019 12:43:34 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:29.019 12:43:34 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:23:29.019 12:43:34 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:29.019 12:43:34 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:23:29.019 12:43:34 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.019 12:43:34 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:29.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.019 --rc genhtml_branch_coverage=1 00:23:29.019 --rc genhtml_function_coverage=1 00:23:29.019 --rc genhtml_legend=1 00:23:29.019 --rc geninfo_all_blocks=1 00:23:29.019 --rc geninfo_unexecuted_blocks=1 00:23:29.019 00:23:29.019 ' 00:23:29.019 12:43:34 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:29.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.019 --rc genhtml_branch_coverage=1 00:23:29.019 --rc genhtml_function_coverage=1 00:23:29.019 --rc genhtml_legend=1 00:23:29.019 --rc geninfo_all_blocks=1 00:23:29.019 --rc geninfo_unexecuted_blocks=1 00:23:29.019 00:23:29.019 ' 00:23:29.019 12:43:34 nvmf_dif -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:29.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.019 --rc genhtml_branch_coverage=1 00:23:29.019 --rc genhtml_function_coverage=1 00:23:29.019 --rc genhtml_legend=1 00:23:29.019 --rc geninfo_all_blocks=1 00:23:29.019 --rc geninfo_unexecuted_blocks=1 00:23:29.019 00:23:29.019 ' 00:23:29.019 12:43:34 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:29.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.019 --rc genhtml_branch_coverage=1 00:23:29.019 --rc genhtml_function_coverage=1 00:23:29.019 --rc genhtml_legend=1 00:23:29.019 --rc geninfo_all_blocks=1 00:23:29.019 --rc geninfo_unexecuted_blocks=1 00:23:29.019 00:23:29.019 ' 00:23:29.019 12:43:34 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.019 12:43:34 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.019 12:43:34 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.019 12:43:34 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.019 12:43:34 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.020 12:43:34 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.020 12:43:34 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.020 12:43:34 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:29.020 12:43:34 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.020 12:43:34 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:29.020 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.020 12:43:34 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:29.020 12:43:34 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:29.020 12:43:34 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:29.020 12:43:34 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:29.020 12:43:34 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.020 12:43:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:29.020 12:43:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:29.020 Cannot find device 
"nvmf_init_br" 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:29.020 Cannot find device "nvmf_init_br2" 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:29.020 Cannot find device "nvmf_tgt_br" 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@164 -- # true 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:29.020 Cannot find device "nvmf_tgt_br2" 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@165 -- # true 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:29.020 Cannot find device "nvmf_init_br" 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@166 -- # true 00:23:29.020 12:43:34 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:29.279 Cannot find device "nvmf_init_br2" 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@167 -- # true 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:29.279 Cannot find device "nvmf_tgt_br" 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@168 -- # true 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:29.279 Cannot find device "nvmf_tgt_br2" 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@169 -- # true 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:29.279 Cannot find device "nvmf_br" 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@170 -- # true 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:29.279 Cannot find device "nvmf_init_if" 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@171 -- # true 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:29.279 Cannot find device "nvmf_init_if2" 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@172 -- # true 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:29.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@173 -- # true 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:29.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@174 -- # true 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:29.279 12:43:34 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:29.280 12:43:34 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:29.280 12:43:34 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:29.280 12:43:34 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:29.280 12:43:34 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:29.280 12:43:34 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:29.280 12:43:34 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:29.280 12:43:34 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:29.280 12:43:34 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:29.280 12:43:34 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:29.280 12:43:34 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:29.280 12:43:34 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:29.539 12:43:34 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:29.539 12:43:34 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:29.539 12:43:34 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:29.539 12:43:34 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:29.539 12:43:34 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:29.539 12:43:34 nvmf_dif -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:29.539 12:43:34 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:29.539 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:29.539 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:23:29.539 00:23:29.539 --- 10.0.0.3 ping statistics --- 00:23:29.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.539 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:23:29.539 12:43:34 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:29.539 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:23:29.539 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:23:29.539 00:23:29.539 --- 10.0.0.4 ping statistics --- 00:23:29.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.539 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:23:29.539 12:43:34 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:29.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:29.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:23:29.539 00:23:29.539 --- 10.0.0.1 ping statistics --- 00:23:29.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.539 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:23:29.539 12:43:34 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:29.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:23:29.539 00:23:29.539 --- 10.0.0.2 ping statistics --- 00:23:29.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.539 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:23:29.539 12:43:34 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.539 12:43:34 nvmf_dif -- nvmf/common.sh@457 -- # return 0 00:23:29.539 12:43:34 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:23:29.539 12:43:34 nvmf_dif -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:29.799 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:29.799 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.799 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.799 12:43:34 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.799 12:43:34 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:29.799 12:43:34 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:29.799 12:43:34 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.799 12:43:34 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:29.799 12:43:34 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:29.799 12:43:35 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:29.799 12:43:35 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:29.799 12:43:35 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:29.799 12:43:35 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:29.799 12:43:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:29.799 12:43:35 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=98240 00:23:29.799 12:43:35 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:29.799 12:43:35 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 98240 00:23:29.799 12:43:35 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 98240 ']' 00:23:29.799 12:43:35 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.799 12:43:35 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:29.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.799 12:43:35 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
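The "Cannot find device" messages and ping checks above come from nvmf_veth_init tearing down and rebuilding the virtual test network: the initiator interfaces (nvmf_init_if, nvmf_init_if2) keep 10.0.0.1/10.0.0.2 in the default namespace, the target interfaces (nvmf_tgt_if, nvmf_tgt_if2) are moved into nvmf_tgt_ns_spdk with 10.0.0.3/10.0.0.4, and the four *_br veth peers are enslaved to the nvmf_br bridge so both sides share one L2 segment. A condensed, single-pair recap is sketched below; the second pair is built the same way and the iptables comment tagging is omitted, so this mirrors the log rather than adding anything new.

# Condensed sketch of the topology built above (first initiator/target pair only).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3    # default namespace -> target namespace, as verified above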
00:23:29.799 12:43:35 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:29.799 12:43:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:30.058 [2024-11-19 12:43:35.076583] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:23:30.059 [2024-11-19 12:43:35.076689] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.059 [2024-11-19 12:43:35.218613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.059 [2024-11-19 12:43:35.262881] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.059 [2024-11-19 12:43:35.262945] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.059 [2024-11-19 12:43:35.262960] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.059 [2024-11-19 12:43:35.262970] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.059 [2024-11-19 12:43:35.262979] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.059 [2024-11-19 12:43:35.263012] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.059 [2024-11-19 12:43:35.300413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:30.318 12:43:35 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.318 12:43:35 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:23:30.318 12:43:35 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:30.318 12:43:35 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:30.318 12:43:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:30.318 12:43:35 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.318 12:43:35 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:30.318 12:43:35 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:30.318 12:43:35 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.318 12:43:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:30.318 [2024-11-19 12:43:35.401178] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.318 12:43:35 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.318 12:43:35 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:30.318 12:43:35 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:30.318 12:43:35 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:30.318 12:43:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:30.318 ************************************ 00:23:30.318 START TEST fio_dif_1_default 00:23:30.318 ************************************ 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:30.318 12:43:35 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:30.318 bdev_null0 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.318 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:30.318 [2024-11-19 12:43:35.449318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.319 12:43:35 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:30.319 { 00:23:30.319 "params": { 00:23:30.319 "name": "Nvme$subsystem", 00:23:30.319 "trtype": "$TEST_TRANSPORT", 00:23:30.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.319 "adrfam": "ipv4", 00:23:30.319 "trsvcid": "$NVMF_PORT", 00:23:30.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.319 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.319 "hdgst": ${hdgst:-false}, 00:23:30.319 "ddgst": ${ddgst:-false} 00:23:30.319 }, 00:23:30.319 "method": "bdev_nvme_attach_controller" 00:23:30.319 } 00:23:30.319 EOF 00:23:30.319 )") 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:30.319 "params": { 00:23:30.319 "name": "Nvme0", 00:23:30.319 "trtype": "tcp", 00:23:30.319 "traddr": "10.0.0.3", 00:23:30.319 "adrfam": "ipv4", 00:23:30.319 "trsvcid": "4420", 00:23:30.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:30.319 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:30.319 "hdgst": false, 00:23:30.319 "ddgst": false 00:23:30.319 }, 00:23:30.319 "method": "bdev_nvme_attach_controller" 00:23:30.319 }' 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:30.319 12:43:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.579 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:30.579 fio-3.35 00:23:30.579 Starting 1 thread 00:23:42.790 00:23:42.790 filename0: (groupid=0, jobs=1): err= 0: pid=98298: Tue Nov 19 12:43:46 2024 00:23:42.790 read: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(391MiB/10001msec) 00:23:42.790 slat (nsec): min=5810, max=62877, avg=7433.31, stdev=3103.36 00:23:42.790 clat (usec): min=316, max=5603, avg=377.27, stdev=50.74 00:23:42.790 lat (usec): min=321, max=5641, avg=384.71, stdev=51.34 00:23:42.790 clat percentiles (usec): 00:23:42.790 | 1.00th=[ 322], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 347], 00:23:42.790 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 379], 00:23:42.790 | 70.00th=[ 392], 80.00th=[ 404], 90.00th=[ 424], 95.00th=[ 449], 00:23:42.790 | 99.00th=[ 494], 99.50th=[ 510], 99.90th=[ 545], 99.95th=[ 562], 00:23:42.790 | 99.99th=[ 1565] 00:23:42.790 bw ( KiB/s): min=37472, max=41120, per=100.00%, avg=40089.26, stdev=887.49, samples=19 00:23:42.790 iops : min= 9368, max=10280, avg=10022.32, stdev=221.87, samples=19 00:23:42.790 lat (usec) : 500=99.17%, 750=0.81% 00:23:42.790 lat (msec) : 2=0.01%, 10=0.01% 00:23:42.790 cpu : usr=85.06%, sys=13.14%, ctx=20, majf=0, minf=0 00:23:42.790 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:42.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.790 issued rwts: total=100196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.790 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:42.790 00:23:42.790 Run status group 0 (all jobs): 
00:23:42.790 READ: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=391MiB (410MB), run=10001-10001msec 00:23:42.790 12:43:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:42.790 12:43:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:42.790 12:43:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:42.790 12:43:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:42.790 12:43:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:42.790 12:43:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:42.790 12:43:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.790 12:43:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:42.790 12:43:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.790 12:43:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:42.790 12:43:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.790 12:43:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:42.790 12:43:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.790 00:23:42.791 real 0m10.856s 00:23:42.791 user 0m9.027s 00:23:42.791 sys 0m1.575s 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:42.791 ************************************ 00:23:42.791 END TEST fio_dif_1_default 00:23:42.791 ************************************ 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:42.791 12:43:46 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:42.791 12:43:46 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:42.791 12:43:46 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:42.791 12:43:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:42.791 ************************************ 00:23:42.791 START TEST fio_dif_1_multi_subsystems 00:23:42.791 ************************************ 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:42.791 bdev_null0 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:42.791 [2024-11-19 12:43:46.352801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:42.791 bdev_null1 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp 
-a 10.0.0.3 -s 4420 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:42.791 { 00:23:42.791 "params": { 00:23:42.791 "name": "Nvme$subsystem", 00:23:42.791 "trtype": "$TEST_TRANSPORT", 00:23:42.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.791 "adrfam": "ipv4", 00:23:42.791 "trsvcid": "$NVMF_PORT", 00:23:42.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.791 "hdgst": ${hdgst:-false}, 00:23:42.791 "ddgst": ${ddgst:-false} 00:23:42.791 }, 00:23:42.791 "method": "bdev_nvme_attach_controller" 00:23:42.791 } 00:23:42.791 EOF 00:23:42.791 )") 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:42.791 12:43:46 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:42.791 { 00:23:42.791 "params": { 00:23:42.791 "name": "Nvme$subsystem", 00:23:42.791 "trtype": "$TEST_TRANSPORT", 00:23:42.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.791 "adrfam": "ipv4", 00:23:42.791 "trsvcid": "$NVMF_PORT", 00:23:42.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.791 "hdgst": ${hdgst:-false}, 00:23:42.791 "ddgst": ${ddgst:-false} 00:23:42.791 }, 00:23:42.791 "method": "bdev_nvme_attach_controller" 00:23:42.791 } 00:23:42.791 EOF 00:23:42.791 )") 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:23:42.791 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:42.791 "params": { 00:23:42.791 "name": "Nvme0", 00:23:42.791 "trtype": "tcp", 00:23:42.791 "traddr": "10.0.0.3", 00:23:42.791 "adrfam": "ipv4", 00:23:42.791 "trsvcid": "4420", 00:23:42.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:42.791 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:42.791 "hdgst": false, 00:23:42.791 "ddgst": false 00:23:42.791 }, 00:23:42.791 "method": "bdev_nvme_attach_controller" 00:23:42.791 },{ 00:23:42.791 "params": { 00:23:42.791 "name": "Nvme1", 00:23:42.791 "trtype": "tcp", 00:23:42.791 "traddr": "10.0.0.3", 00:23:42.791 "adrfam": "ipv4", 00:23:42.791 "trsvcid": "4420", 00:23:42.791 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.791 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.791 "hdgst": false, 00:23:42.792 "ddgst": false 00:23:42.792 }, 00:23:42.792 "method": "bdev_nvme_attach_controller" 00:23:42.792 }' 00:23:42.792 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:42.792 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:42.792 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:42.792 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:42.792 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:42.792 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:42.792 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:42.792 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:42.792 
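The run that follows drives one fio job per exported subsystem through the spdk_bdev ioengine: the JSON assembled above attaches controllers Nvme0 and Nvme1 to cnode0 and cnode1, and each job reads the first namespace of one controller. The actual job file is fed to fio over /dev/fd/61 and never echoed to the log, so the sketch below is only a plausible standalone equivalent; the filename values, the runtime, and the /tmp paths are assumptions, while the remaining options match what fio reports later (randread, 4 KiB blocks, iodepth 4, two jobs).

# Hypothetical, condensed equivalent of the invocation that follows.
cat > /tmp/dif_two_subsystems.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4096
iodepth=4
time_based=1
runtime=10

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
# /tmp/target.json would hold the bdev_nvme_attach_controller config printed above.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/target.json /tmp/dif_two_subsystems.fio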
12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:42.792 12:43:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:42.792 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:42.792 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:42.792 fio-3.35 00:23:42.792 Starting 2 threads 00:23:52.823 00:23:52.823 filename0: (groupid=0, jobs=1): err= 0: pid=98459: Tue Nov 19 12:43:57 2024 00:23:52.823 read: IOPS=5293, BW=20.7MiB/s (21.7MB/s)(207MiB/10001msec) 00:23:52.823 slat (nsec): min=6181, max=74910, avg=12362.74, stdev=4547.62 00:23:52.823 clat (usec): min=467, max=1182, avg=721.71, stdev=52.28 00:23:52.823 lat (usec): min=473, max=1208, avg=734.07, stdev=52.81 00:23:52.823 clat percentiles (usec): 00:23:52.823 | 1.00th=[ 635], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 676], 00:23:52.823 | 30.00th=[ 693], 40.00th=[ 701], 50.00th=[ 709], 60.00th=[ 725], 00:23:52.823 | 70.00th=[ 742], 80.00th=[ 758], 90.00th=[ 791], 95.00th=[ 824], 00:23:52.823 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 947], 99.95th=[ 963], 00:23:52.823 | 99.99th=[ 1020] 00:23:52.823 bw ( KiB/s): min=20736, max=21515, per=50.02%, avg=21183.30, stdev=208.02, samples=20 00:23:52.823 iops : min= 5184, max= 5378, avg=5295.70, stdev=51.95, samples=20 00:23:52.823 lat (usec) : 500=0.01%, 750=75.30%, 1000=24.68% 00:23:52.823 lat (msec) : 2=0.02% 00:23:52.823 cpu : usr=88.47%, sys=10.11%, ctx=5, majf=0, minf=0 00:23:52.823 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:52.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.823 issued rwts: total=52944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.823 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:52.823 filename1: (groupid=0, jobs=1): err= 0: pid=98460: Tue Nov 19 12:43:57 2024 00:23:52.823 read: IOPS=5293, BW=20.7MiB/s (21.7MB/s)(207MiB/10001msec) 00:23:52.823 slat (usec): min=6, max=115, avg=12.54, stdev= 4.67 00:23:52.823 clat (usec): min=456, max=1214, avg=721.37, stdev=59.87 00:23:52.823 lat (usec): min=469, max=1240, avg=733.91, stdev=61.13 00:23:52.823 clat percentiles (usec): 00:23:52.823 | 1.00th=[ 603], 5.00th=[ 627], 10.00th=[ 652], 20.00th=[ 676], 00:23:52.823 | 30.00th=[ 693], 40.00th=[ 701], 50.00th=[ 717], 60.00th=[ 734], 00:23:52.823 | 70.00th=[ 742], 80.00th=[ 766], 90.00th=[ 799], 95.00th=[ 832], 00:23:52.823 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 971], 99.95th=[ 988], 00:23:52.823 | 99.99th=[ 1020] 00:23:52.823 bw ( KiB/s): min=20736, max=21515, per=50.02%, avg=21183.30, stdev=208.02, samples=20 00:23:52.823 iops : min= 5184, max= 5378, avg=5295.70, stdev=51.95, samples=20 00:23:52.823 lat (usec) : 500=0.01%, 750=72.58%, 1000=27.40% 00:23:52.823 lat (msec) : 2=0.02% 00:23:52.823 cpu : usr=88.47%, sys=10.02%, ctx=9, majf=0, minf=0 00:23:52.823 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:52.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.823 issued rwts: total=52941,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:23:52.823 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:52.823 00:23:52.823 Run status group 0 (all jobs): 00:23:52.824 READ: bw=41.4MiB/s (43.4MB/s), 20.7MiB/s-20.7MiB/s (21.7MB/s-21.7MB/s), io=414MiB (434MB), run=10001-10001msec 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.824 00:23:52.824 real 0m10.995s 00:23:52.824 user 0m18.379s 00:23:52.824 sys 0m2.244s 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:52.824 12:43:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.824 ************************************ 00:23:52.824 END TEST fio_dif_1_multi_subsystems 00:23:52.824 ************************************ 00:23:52.824 12:43:57 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:52.824 12:43:57 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:52.824 12:43:57 nvmf_dif 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:52.824 12:43:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:52.824 ************************************ 00:23:52.824 START TEST fio_dif_rand_params 00:23:52.824 ************************************ 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:52.824 bdev_null0 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:52.824 [2024-11-19 12:43:57.402241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 
-- # fio /dev/fd/62 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:52.824 { 00:23:52.824 "params": { 00:23:52.824 "name": "Nvme$subsystem", 00:23:52.824 "trtype": "$TEST_TRANSPORT", 00:23:52.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.824 "adrfam": "ipv4", 00:23:52.824 "trsvcid": "$NVMF_PORT", 00:23:52.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.824 "hdgst": ${hdgst:-false}, 00:23:52.824 "ddgst": ${ddgst:-false} 00:23:52.824 }, 00:23:52.824 "method": "bdev_nvme_attach_controller" 00:23:52.824 } 00:23:52.824 EOF 00:23:52.824 )") 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
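The fio job that follows runs against a target stood up a few trace entries earlier by create_subsystems 0: a null bdev (64 MB, 512-byte blocks, 16-byte metadata, DIF type 3) exported as nqn.2016-06.io.spdk:cnode0 with an NVMe/TCP listener on 10.0.0.3:4420. A minimal sketch of that target-side sequence, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on PATH (the harness issues the same calls through its rpc_cmd wrapper rather than invoking rpc.py directly):

# Sketch of the target setup traced above for subsystem 0; not the harness itself.
rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3    # 64 MB null bdev, 512 B blocks, 16 B metadata, DIF type 3
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0     # expose the bdev as a namespace
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420    # NVMe/TCP listener

The JSON block printed next is what gen_nvmf_target_json emits over /dev/fd/62; the fio bdev plugin replays each bdev_nvme_attach_controller entry on the initiator side to attach the controller before the workload starts.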
00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:52.824 "params": { 00:23:52.824 "name": "Nvme0", 00:23:52.824 "trtype": "tcp", 00:23:52.824 "traddr": "10.0.0.3", 00:23:52.824 "adrfam": "ipv4", 00:23:52.824 "trsvcid": "4420", 00:23:52.824 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:52.824 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:52.824 "hdgst": false, 00:23:52.824 "ddgst": false 00:23:52.824 }, 00:23:52.824 "method": "bdev_nvme_attach_controller" 00:23:52.824 }' 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:52.824 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:52.825 12:43:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:52.825 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:52.825 ... 
00:23:52.825 fio-3.35 00:23:52.825 Starting 3 threads 00:23:58.099 00:23:58.099 filename0: (groupid=0, jobs=1): err= 0: pid=98609: Tue Nov 19 12:44:03 2024 00:23:58.099 read: IOPS=285, BW=35.7MiB/s (37.5MB/s)(179MiB/5005msec) 00:23:58.099 slat (nsec): min=6906, max=42570, avg=13342.09, stdev=3999.67 00:23:58.099 clat (usec): min=8826, max=15482, avg=10462.33, stdev=465.99 00:23:58.099 lat (usec): min=8839, max=15513, avg=10475.67, stdev=466.58 00:23:58.099 clat percentiles (usec): 00:23:58.099 | 1.00th=[10028], 5.00th=[10159], 10.00th=[10159], 20.00th=[10159], 00:23:58.099 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10290], 60.00th=[10421], 00:23:58.099 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:23:58.099 | 99.00th=[11600], 99.50th=[14222], 99.90th=[15401], 99.95th=[15533], 00:23:58.099 | 99.99th=[15533] 00:23:58.099 bw ( KiB/s): min=36096, max=36864, per=33.32%, avg=36556.80, stdev=396.59, samples=10 00:23:58.099 iops : min= 282, max= 288, avg=285.60, stdev= 3.10, samples=10 00:23:58.099 lat (msec) : 10=0.35%, 20=99.65% 00:23:58.099 cpu : usr=90.95%, sys=8.47%, ctx=5, majf=0, minf=0 00:23:58.099 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.099 issued rwts: total=1431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.099 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:58.099 filename0: (groupid=0, jobs=1): err= 0: pid=98610: Tue Nov 19 12:44:03 2024 00:23:58.099 read: IOPS=285, BW=35.7MiB/s (37.4MB/s)(179MiB/5009msec) 00:23:58.099 slat (nsec): min=3204, max=56557, avg=9330.33, stdev=4362.77 00:23:58.099 clat (usec): min=9992, max=15628, avg=10477.24, stdev=497.91 00:23:58.099 lat (usec): min=9999, max=15641, avg=10486.57, stdev=498.08 00:23:58.099 clat percentiles (usec): 00:23:58.099 | 1.00th=[10028], 5.00th=[10159], 10.00th=[10159], 20.00th=[10290], 00:23:58.099 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10290], 60.00th=[10421], 00:23:58.099 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:23:58.099 | 99.00th=[11863], 99.50th=[14484], 99.90th=[15664], 99.95th=[15664], 00:23:58.099 | 99.99th=[15664] 00:23:58.099 bw ( KiB/s): min=36096, max=37632, per=33.32%, avg=36556.80, stdev=536.99, samples=10 00:23:58.099 iops : min= 282, max= 294, avg=285.60, stdev= 4.20, samples=10 00:23:58.099 lat (msec) : 10=0.14%, 20=99.86% 00:23:58.099 cpu : usr=90.81%, sys=8.57%, ctx=9, majf=0, minf=0 00:23:58.099 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.099 issued rwts: total=1431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.099 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:58.099 filename0: (groupid=0, jobs=1): err= 0: pid=98611: Tue Nov 19 12:44:03 2024 00:23:58.099 read: IOPS=285, BW=35.7MiB/s (37.5MB/s)(179MiB/5005msec) 00:23:58.099 slat (usec): min=6, max=104, avg=14.09, stdev= 4.80 00:23:58.099 clat (usec): min=8826, max=15469, avg=10459.83, stdev=465.33 00:23:58.099 lat (usec): min=8838, max=15518, avg=10473.92, stdev=466.11 00:23:58.099 clat percentiles (usec): 00:23:58.099 | 1.00th=[10028], 5.00th=[10159], 10.00th=[10159], 20.00th=[10159], 00:23:58.099 | 30.00th=[10290], 40.00th=[10290], 
50.00th=[10290], 60.00th=[10421], 00:23:58.099 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:23:58.099 | 99.00th=[11600], 99.50th=[14222], 99.90th=[15401], 99.95th=[15533], 00:23:58.099 | 99.99th=[15533] 00:23:58.099 bw ( KiB/s): min=36096, max=36864, per=33.32%, avg=36556.80, stdev=396.59, samples=10 00:23:58.099 iops : min= 282, max= 288, avg=285.60, stdev= 3.10, samples=10 00:23:58.099 lat (msec) : 10=0.21%, 20=99.79% 00:23:58.099 cpu : usr=90.87%, sys=8.51%, ctx=44, majf=0, minf=0 00:23:58.099 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.099 issued rwts: total=1431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.099 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:58.099 00:23:58.099 Run status group 0 (all jobs): 00:23:58.099 READ: bw=107MiB/s (112MB/s), 35.7MiB/s-35.7MiB/s (37.4MB/s-37.5MB/s), io=537MiB (563MB), run=5005-5009msec 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:58.099 
12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.099 bdev_null0 00:23:58.099 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.100 [2024-11-19 12:44:03.268749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.100 bdev_null1 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.100 12:44:03 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.100 bdev_null2 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:58.100 { 00:23:58.100 "params": { 00:23:58.100 "name": "Nvme$subsystem", 00:23:58.100 "trtype": "$TEST_TRANSPORT", 00:23:58.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.100 "adrfam": "ipv4", 00:23:58.100 "trsvcid": "$NVMF_PORT", 00:23:58.100 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.100 "hdgst": ${hdgst:-false}, 00:23:58.100 "ddgst": ${ddgst:-false} 00:23:58.100 }, 00:23:58.100 "method": "bdev_nvme_attach_controller" 00:23:58.100 } 00:23:58.100 EOF 00:23:58.100 )") 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:58.100 { 00:23:58.100 "params": { 00:23:58.100 "name": "Nvme$subsystem", 00:23:58.100 "trtype": "$TEST_TRANSPORT", 00:23:58.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.100 "adrfam": "ipv4", 00:23:58.100 "trsvcid": "$NVMF_PORT", 00:23:58.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.100 "hdgst": ${hdgst:-false}, 00:23:58.100 "ddgst": ${ddgst:-false} 00:23:58.100 }, 00:23:58.100 "method": "bdev_nvme_attach_controller" 00:23:58.100 } 00:23:58.100 EOF 00:23:58.100 )") 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:58.100 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:58.360 { 00:23:58.360 "params": { 00:23:58.360 "name": "Nvme$subsystem", 00:23:58.360 "trtype": "$TEST_TRANSPORT", 00:23:58.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.360 "adrfam": "ipv4", 00:23:58.360 "trsvcid": "$NVMF_PORT", 00:23:58.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.360 "hdgst": ${hdgst:-false}, 00:23:58.360 "ddgst": ${ddgst:-false} 00:23:58.360 }, 00:23:58.360 "method": "bdev_nvme_attach_controller" 00:23:58.360 } 00:23:58.360 EOF 00:23:58.360 )") 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:58.360 "params": { 00:23:58.360 "name": "Nvme0", 00:23:58.360 "trtype": "tcp", 00:23:58.360 "traddr": "10.0.0.3", 00:23:58.360 "adrfam": "ipv4", 00:23:58.360 "trsvcid": "4420", 00:23:58.360 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:58.360 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:58.360 "hdgst": false, 00:23:58.360 "ddgst": false 00:23:58.360 }, 00:23:58.360 "method": "bdev_nvme_attach_controller" 00:23:58.360 },{ 00:23:58.360 "params": { 00:23:58.360 "name": "Nvme1", 00:23:58.360 "trtype": "tcp", 00:23:58.360 "traddr": "10.0.0.3", 00:23:58.360 "adrfam": "ipv4", 00:23:58.360 "trsvcid": "4420", 00:23:58.360 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.360 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.360 "hdgst": false, 00:23:58.360 "ddgst": false 00:23:58.360 }, 00:23:58.360 "method": "bdev_nvme_attach_controller" 00:23:58.360 },{ 00:23:58.360 "params": { 00:23:58.360 "name": "Nvme2", 00:23:58.360 "trtype": "tcp", 00:23:58.360 "traddr": "10.0.0.3", 00:23:58.360 "adrfam": "ipv4", 00:23:58.360 "trsvcid": "4420", 00:23:58.360 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:58.360 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:58.360 "hdgst": false, 00:23:58.360 "ddgst": false 00:23:58.360 }, 00:23:58.360 "method": "bdev_nvme_attach_controller" 00:23:58.360 }' 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:58.360 12:44:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.360 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:58.360 ... 00:23:58.360 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:58.360 ... 00:23:58.360 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:58.360 ... 00:23:58.360 fio-3.35 00:23:58.360 Starting 24 threads 00:24:10.576 00:24:10.576 filename0: (groupid=0, jobs=1): err= 0: pid=98702: Tue Nov 19 12:44:14 2024 00:24:10.576 read: IOPS=173, BW=695KiB/s (712kB/s)(6968KiB/10026msec) 00:24:10.576 slat (usec): min=3, max=8026, avg=32.98, stdev=383.50 00:24:10.576 clat (msec): min=40, max=156, avg=91.94, stdev=21.44 00:24:10.576 lat (msec): min=40, max=156, avg=91.97, stdev=21.44 00:24:10.576 clat percentiles (msec): 00:24:10.576 | 1.00th=[ 60], 5.00th=[ 61], 10.00th=[ 70], 20.00th=[ 72], 00:24:10.576 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 108], 00:24:10.576 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 121], 00:24:10.576 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:24:10.576 | 99.99th=[ 157] 00:24:10.576 bw ( KiB/s): min= 616, max= 880, per=4.14%, avg=690.30, stdev=60.23, samples=20 00:24:10.576 iops : min= 154, max= 220, avg=172.55, stdev=15.05, samples=20 00:24:10.576 lat (msec) : 50=0.52%, 100=54.88%, 250=44.60% 00:24:10.576 cpu : usr=31.21%, sys=1.98%, ctx=881, majf=0, minf=9 00:24:10.576 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.8%, 16=15.0%, 32=0.0%, >=64=0.0% 00:24:10.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.576 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.576 issued rwts: total=1742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.576 filename0: (groupid=0, jobs=1): err= 0: pid=98703: Tue Nov 19 12:44:14 2024 00:24:10.576 read: IOPS=152, BW=610KiB/s (624kB/s)(6136KiB/10067msec) 00:24:10.576 slat (usec): min=4, max=4023, avg=21.03, stdev=167.51 00:24:10.576 clat (msec): min=5, max=168, avg=104.79, stdev=29.25 00:24:10.576 lat (msec): min=5, max=168, avg=104.81, stdev=29.25 00:24:10.576 clat percentiles (msec): 00:24:10.576 | 1.00th=[ 6], 5.00th=[ 55], 10.00th=[ 66], 20.00th=[ 75], 00:24:10.576 | 30.00th=[ 105], 40.00th=[ 110], 50.00th=[ 111], 60.00th=[ 113], 00:24:10.576 | 70.00th=[ 117], 80.00th=[ 120], 90.00th=[ 142], 95.00th=[ 148], 00:24:10.576 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 169], 99.95th=[ 169], 00:24:10.576 | 99.99th=[ 169] 00:24:10.576 bw ( KiB/s): min= 512, max= 1280, per=3.64%, avg=607.20, stdev=187.49, samples=20 00:24:10.576 iops : min= 128, max= 320, avg=151.80, stdev=46.87, samples=20 00:24:10.576 lat (msec) : 10=2.09%, 50=2.09%, 100=23.01%, 250=72.82% 00:24:10.576 cpu : usr=45.81%, sys=2.58%, ctx=1322, majf=0, minf=9 00:24:10.576 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:24:10.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.576 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.576 issued rwts: total=1534,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:24:10.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.576 filename0: (groupid=0, jobs=1): err= 0: pid=98704: Tue Nov 19 12:44:14 2024 00:24:10.576 read: IOPS=182, BW=728KiB/s (746kB/s)(7312KiB/10040msec) 00:24:10.576 slat (usec): min=3, max=7020, avg=25.99, stdev=228.03 00:24:10.576 clat (msec): min=34, max=123, avg=87.66, stdev=21.87 00:24:10.576 lat (msec): min=34, max=123, avg=87.69, stdev=21.87 00:24:10.576 clat percentiles (msec): 00:24:10.576 | 1.00th=[ 38], 5.00th=[ 55], 10.00th=[ 62], 20.00th=[ 71], 00:24:10.576 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 99], 00:24:10.576 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 117], 95.00th=[ 120], 00:24:10.576 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 124], 99.95th=[ 125], 00:24:10.576 | 99.99th=[ 125] 00:24:10.576 bw ( KiB/s): min= 664, max= 962, per=4.36%, avg=726.70, stdev=86.51, samples=20 00:24:10.576 iops : min= 166, max= 240, avg=181.60, stdev=21.55, samples=20 00:24:10.576 lat (msec) : 50=4.05%, 100=56.78%, 250=39.17% 00:24:10.576 cpu : usr=41.47%, sys=2.41%, ctx=1239, majf=0, minf=9 00:24:10.576 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:10.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.576 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.576 issued rwts: total=1828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.576 filename0: (groupid=0, jobs=1): err= 0: pid=98705: Tue Nov 19 12:44:14 2024 00:24:10.576 read: IOPS=161, BW=646KiB/s (662kB/s)(6496KiB/10048msec) 00:24:10.576 slat (usec): min=4, max=8026, avg=28.84, stdev=338.72 00:24:10.576 clat (msec): min=49, max=164, avg=98.57, stdev=22.03 00:24:10.576 lat (msec): min=49, max=164, avg=98.59, stdev=22.04 00:24:10.576 clat percentiles (msec): 00:24:10.576 | 1.00th=[ 61], 5.00th=[ 66], 10.00th=[ 70], 20.00th=[ 72], 00:24:10.576 | 30.00th=[ 82], 40.00th=[ 97], 50.00th=[ 107], 60.00th=[ 109], 00:24:10.576 | 70.00th=[ 112], 80.00th=[ 118], 90.00th=[ 121], 95.00th=[ 133], 00:24:10.576 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 165], 99.95th=[ 165], 00:24:10.576 | 99.99th=[ 165] 00:24:10.576 bw ( KiB/s): min= 512, max= 896, per=3.88%, avg=646.00, stdev=99.81, samples=20 00:24:10.576 iops : min= 128, max= 224, avg=161.50, stdev=24.95, samples=20 00:24:10.576 lat (msec) : 50=0.86%, 100=41.63%, 250=57.51% 00:24:10.576 cpu : usr=35.81%, sys=2.21%, ctx=1049, majf=0, minf=9 00:24:10.576 IO depths : 1=0.1%, 2=3.3%, 4=13.0%, 8=69.5%, 16=14.2%, 32=0.0%, >=64=0.0% 00:24:10.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.576 complete : 0=0.0%, 4=90.8%, 8=6.4%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.576 issued rwts: total=1624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.576 filename0: (groupid=0, jobs=1): err= 0: pid=98706: Tue Nov 19 12:44:14 2024 00:24:10.576 read: IOPS=163, BW=655KiB/s (671kB/s)(6572KiB/10027msec) 00:24:10.576 slat (nsec): min=8404, max=32553, avg=14128.41, stdev=4699.82 00:24:10.576 clat (msec): min=41, max=149, avg=97.56, stdev=21.01 00:24:10.576 lat (msec): min=41, max=149, avg=97.58, stdev=21.01 00:24:10.576 clat percentiles (msec): 00:24:10.576 | 1.00th=[ 60], 5.00th=[ 65], 10.00th=[ 72], 20.00th=[ 72], 00:24:10.576 | 30.00th=[ 84], 40.00th=[ 96], 50.00th=[ 108], 60.00th=[ 108], 00:24:10.576 | 70.00th=[ 109], 
80.00th=[ 121], 90.00th=[ 121], 95.00th=[ 121], 00:24:10.576 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 150], 00:24:10.576 | 99.99th=[ 150] 00:24:10.576 bw ( KiB/s): min= 512, max= 897, per=3.90%, avg=650.55, stdev=83.96, samples=20 00:24:10.576 iops : min= 128, max= 224, avg=162.60, stdev=20.96, samples=20 00:24:10.576 lat (msec) : 50=0.49%, 100=41.57%, 250=57.94% 00:24:10.576 cpu : usr=32.43%, sys=2.12%, ctx=888, majf=0, minf=9 00:24:10.576 IO depths : 1=0.1%, 2=1.6%, 4=6.3%, 8=76.1%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:10.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.576 complete : 0=0.0%, 4=89.6%, 8=9.0%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.576 issued rwts: total=1643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.576 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.576 filename0: (groupid=0, jobs=1): err= 0: pid=98707: Tue Nov 19 12:44:14 2024 00:24:10.576 read: IOPS=177, BW=709KiB/s (726kB/s)(7128KiB/10055msec) 00:24:10.576 slat (usec): min=7, max=8034, avg=29.74, stdev=326.00 00:24:10.576 clat (msec): min=31, max=154, avg=89.99, stdev=23.05 00:24:10.576 lat (msec): min=31, max=154, avg=90.02, stdev=23.05 00:24:10.576 clat percentiles (msec): 00:24:10.576 | 1.00th=[ 34], 5.00th=[ 49], 10.00th=[ 63], 20.00th=[ 71], 00:24:10.577 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 93], 60.00th=[ 105], 00:24:10.577 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 118], 95.00th=[ 121], 00:24:10.577 | 99.00th=[ 128], 99.50th=[ 128], 99.90th=[ 153], 99.95th=[ 155], 00:24:10.577 | 99.99th=[ 155] 00:24:10.577 bw ( KiB/s): min= 584, max= 1040, per=4.24%, avg=706.40, stdev=119.99, samples=20 00:24:10.577 iops : min= 146, max= 260, avg=176.60, stdev=30.00, samples=20 00:24:10.577 lat (msec) : 50=5.67%, 100=49.05%, 250=45.29% 00:24:10.577 cpu : usr=40.12%, sys=2.13%, ctx=1304, majf=0, minf=9 00:24:10.577 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:10.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.577 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.577 issued rwts: total=1782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.577 filename0: (groupid=0, jobs=1): err= 0: pid=98708: Tue Nov 19 12:44:14 2024 00:24:10.577 read: IOPS=174, BW=699KiB/s (716kB/s)(7032KiB/10055msec) 00:24:10.577 slat (usec): min=7, max=11030, avg=27.04, stdev=338.49 00:24:10.577 clat (msec): min=26, max=155, avg=91.20, stdev=24.06 00:24:10.577 lat (msec): min=26, max=155, avg=91.23, stdev=24.05 00:24:10.577 clat percentiles (msec): 00:24:10.577 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 71], 00:24:10.577 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 102], 60.00th=[ 107], 00:24:10.577 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 120], 95.00th=[ 121], 00:24:10.577 | 99.00th=[ 128], 99.50th=[ 132], 99.90th=[ 155], 99.95th=[ 157], 00:24:10.577 | 99.99th=[ 157] 00:24:10.577 bw ( KiB/s): min= 584, max= 1088, per=4.18%, avg=696.80, stdev=138.66, samples=20 00:24:10.577 iops : min= 146, max= 272, avg=174.20, stdev=34.66, samples=20 00:24:10.577 lat (msec) : 50=6.26%, 100=43.46%, 250=50.28% 00:24:10.577 cpu : usr=34.68%, sys=2.24%, ctx=1014, majf=0, minf=9 00:24:10.577 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.3%, 16=16.6%, 32=0.0%, >=64=0.0% 00:24:10.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.577 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.577 issued rwts: total=1758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.577 filename0: (groupid=0, jobs=1): err= 0: pid=98709: Tue Nov 19 12:44:14 2024 00:24:10.577 read: IOPS=167, BW=669KiB/s (685kB/s)(6720KiB/10041msec) 00:24:10.577 slat (usec): min=8, max=4023, avg=16.74, stdev=97.96 00:24:10.577 clat (msec): min=48, max=159, avg=95.38, stdev=21.88 00:24:10.577 lat (msec): min=48, max=159, avg=95.40, stdev=21.88 00:24:10.577 clat percentiles (msec): 00:24:10.577 | 1.00th=[ 62], 5.00th=[ 67], 10.00th=[ 69], 20.00th=[ 72], 00:24:10.577 | 30.00th=[ 77], 40.00th=[ 86], 50.00th=[ 100], 60.00th=[ 107], 00:24:10.577 | 70.00th=[ 111], 80.00th=[ 115], 90.00th=[ 121], 95.00th=[ 129], 00:24:10.577 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 159], 99.95th=[ 159], 00:24:10.577 | 99.99th=[ 159] 00:24:10.577 bw ( KiB/s): min= 512, max= 874, per=4.00%, avg=667.85, stdev=77.98, samples=20 00:24:10.577 iops : min= 128, max= 218, avg=166.90, stdev=19.42, samples=20 00:24:10.577 lat (msec) : 50=0.18%, 100=50.89%, 250=48.93% 00:24:10.577 cpu : usr=35.60%, sys=2.16%, ctx=1698, majf=0, minf=9 00:24:10.577 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=76.1%, 16=15.0%, 32=0.0%, >=64=0.0% 00:24:10.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.577 complete : 0=0.0%, 4=88.9%, 8=9.5%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.577 issued rwts: total=1680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.577 filename1: (groupid=0, jobs=1): err= 0: pid=98710: Tue Nov 19 12:44:14 2024 00:24:10.577 read: IOPS=176, BW=707KiB/s (724kB/s)(7120KiB/10074msec) 00:24:10.577 slat (nsec): min=4872, max=42531, avg=13434.71, stdev=5123.62 00:24:10.577 clat (msec): min=2, max=150, avg=90.36, stdev=27.21 00:24:10.577 lat (msec): min=2, max=150, avg=90.37, stdev=27.21 00:24:10.577 clat percentiles (msec): 00:24:10.577 | 1.00th=[ 5], 5.00th=[ 35], 10.00th=[ 61], 20.00th=[ 72], 00:24:10.577 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 99], 60.00th=[ 106], 00:24:10.577 | 70.00th=[ 109], 80.00th=[ 112], 90.00th=[ 118], 95.00th=[ 121], 00:24:10.577 | 99.00th=[ 126], 99.50th=[ 134], 99.90th=[ 148], 99.95th=[ 150], 00:24:10.577 | 99.99th=[ 150] 00:24:10.577 bw ( KiB/s): min= 584, max= 1536, per=4.23%, avg=705.60, stdev=204.68, samples=20 00:24:10.577 iops : min= 146, max= 384, avg=176.40, stdev=51.17, samples=20 00:24:10.577 lat (msec) : 4=0.84%, 10=2.75%, 50=2.98%, 100=44.49%, 250=48.93% 00:24:10.577 cpu : usr=39.58%, sys=2.03%, ctx=1212, majf=0, minf=0 00:24:10.577 IO depths : 1=0.2%, 2=1.0%, 4=3.9%, 8=78.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:24:10.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.577 complete : 0=0.0%, 4=88.6%, 8=10.5%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.577 issued rwts: total=1780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.577 filename1: (groupid=0, jobs=1): err= 0: pid=98711: Tue Nov 19 12:44:14 2024 00:24:10.577 read: IOPS=179, BW=717KiB/s (734kB/s)(7176KiB/10014msec) 00:24:10.577 slat (usec): min=8, max=8025, avg=25.33, stdev=270.54 00:24:10.577 clat (msec): min=34, max=135, avg=89.19, stdev=21.60 00:24:10.577 lat (msec): min=34, max=135, avg=89.21, stdev=21.60 00:24:10.577 clat percentiles (msec): 00:24:10.577 | 1.00th=[ 42], 5.00th=[ 57], 10.00th=[ 64], 20.00th=[ 70], 00:24:10.577 | 30.00th=[ 
73], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 103], 00:24:10.577 | 70.00th=[ 107], 80.00th=[ 112], 90.00th=[ 117], 95.00th=[ 121], 00:24:10.577 | 99.00th=[ 125], 99.50th=[ 126], 99.90th=[ 132], 99.95th=[ 136], 00:24:10.577 | 99.99th=[ 136] 00:24:10.577 bw ( KiB/s): min= 637, max= 936, per=4.27%, avg=711.05, stdev=76.97, samples=20 00:24:10.577 iops : min= 159, max= 234, avg=177.75, stdev=19.25, samples=20 00:24:10.577 lat (msec) : 50=3.07%, 100=55.24%, 250=41.69% 00:24:10.577 cpu : usr=35.89%, sys=1.79%, ctx=1647, majf=0, minf=9 00:24:10.577 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=82.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:10.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.577 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.577 issued rwts: total=1794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.577 filename1: (groupid=0, jobs=1): err= 0: pid=98712: Tue Nov 19 12:44:14 2024 00:24:10.577 read: IOPS=187, BW=751KiB/s (769kB/s)(7508KiB/10002msec) 00:24:10.577 slat (usec): min=6, max=4026, avg=25.33, stdev=201.00 00:24:10.577 clat (usec): min=1635, max=140466, avg=85135.36, stdev=25837.73 00:24:10.577 lat (usec): min=1642, max=140492, avg=85160.68, stdev=25834.55 00:24:10.577 clat percentiles (msec): 00:24:10.577 | 1.00th=[ 3], 5.00th=[ 44], 10.00th=[ 57], 20.00th=[ 68], 00:24:10.577 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 83], 60.00th=[ 96], 00:24:10.577 | 70.00th=[ 107], 80.00th=[ 111], 90.00th=[ 117], 95.00th=[ 120], 00:24:10.577 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 140], 99.95th=[ 140], 00:24:10.577 | 99.99th=[ 140] 00:24:10.577 bw ( KiB/s): min= 664, max= 1024, per=4.37%, avg=728.42, stdev=92.62, samples=19 00:24:10.577 iops : min= 166, max= 256, avg=182.11, stdev=23.16, samples=19 00:24:10.577 lat (msec) : 2=0.85%, 4=1.23%, 10=0.48%, 50=5.33%, 100=54.50% 00:24:10.577 lat (msec) : 250=37.61% 00:24:10.577 cpu : usr=41.58%, sys=2.52%, ctx=1342, majf=0, minf=9 00:24:10.577 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:10.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.577 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.577 issued rwts: total=1877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.577 filename1: (groupid=0, jobs=1): err= 0: pid=98713: Tue Nov 19 12:44:14 2024 00:24:10.577 read: IOPS=181, BW=726KiB/s (743kB/s)(7288KiB/10040msec) 00:24:10.577 slat (usec): min=8, max=4025, avg=21.52, stdev=155.34 00:24:10.577 clat (msec): min=34, max=141, avg=88.04, stdev=22.05 00:24:10.577 lat (msec): min=34, max=141, avg=88.06, stdev=22.05 00:24:10.577 clat percentiles (msec): 00:24:10.577 | 1.00th=[ 39], 5.00th=[ 52], 10.00th=[ 63], 20.00th=[ 70], 00:24:10.577 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 102], 00:24:10.577 | 70.00th=[ 107], 80.00th=[ 111], 90.00th=[ 117], 95.00th=[ 121], 00:24:10.577 | 99.00th=[ 124], 99.50th=[ 125], 99.90th=[ 130], 99.95th=[ 142], 00:24:10.577 | 99.99th=[ 142] 00:24:10.577 bw ( KiB/s): min= 640, max= 992, per=4.33%, avg=722.40, stdev=91.92, samples=20 00:24:10.577 iops : min= 160, max= 248, avg=180.60, stdev=22.98, samples=20 00:24:10.577 lat (msec) : 50=4.67%, 100=55.05%, 250=40.29% 00:24:10.577 cpu : usr=40.58%, sys=2.51%, ctx=1348, majf=0, minf=9 00:24:10.577 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.4%, 16=15.8%, 32=0.0%, 
>=64=0.0% 00:24:10.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.577 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.577 issued rwts: total=1822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.577 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.577 filename1: (groupid=0, jobs=1): err= 0: pid=98714: Tue Nov 19 12:44:14 2024 00:24:10.577 read: IOPS=175, BW=701KiB/s (718kB/s)(7040KiB/10036msec) 00:24:10.577 slat (usec): min=4, max=8027, avg=21.59, stdev=213.57 00:24:10.577 clat (msec): min=35, max=147, avg=91.13, stdev=21.55 00:24:10.577 lat (msec): min=35, max=147, avg=91.15, stdev=21.55 00:24:10.577 clat percentiles (msec): 00:24:10.577 | 1.00th=[ 46], 5.00th=[ 50], 10.00th=[ 64], 20.00th=[ 72], 00:24:10.577 | 30.00th=[ 74], 40.00th=[ 83], 50.00th=[ 96], 60.00th=[ 105], 00:24:10.577 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 118], 95.00th=[ 121], 00:24:10.577 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 144], 99.95th=[ 148], 00:24:10.577 | 99.99th=[ 148] 00:24:10.577 bw ( KiB/s): min= 608, max= 968, per=4.18%, avg=697.65, stdev=86.38, samples=20 00:24:10.577 iops : min= 152, max= 242, avg=174.40, stdev=21.60, samples=20 00:24:10.577 lat (msec) : 50=5.06%, 100=50.80%, 250=44.15% 00:24:10.577 cpu : usr=38.60%, sys=2.17%, ctx=1107, majf=0, minf=9 00:24:10.577 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:10.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.578 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.578 issued rwts: total=1760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.578 filename1: (groupid=0, jobs=1): err= 0: pid=98715: Tue Nov 19 12:44:14 2024 00:24:10.578 read: IOPS=180, BW=722KiB/s (739kB/s)(7228KiB/10013msec) 00:24:10.578 slat (usec): min=8, max=4838, avg=26.77, stdev=212.27 00:24:10.578 clat (msec): min=38, max=130, avg=88.55, stdev=21.47 00:24:10.578 lat (msec): min=38, max=130, avg=88.57, stdev=21.48 00:24:10.578 clat percentiles (msec): 00:24:10.578 | 1.00th=[ 43], 5.00th=[ 55], 10.00th=[ 64], 20.00th=[ 71], 00:24:10.578 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 101], 00:24:10.578 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 118], 95.00th=[ 120], 00:24:10.578 | 99.00th=[ 123], 99.50th=[ 124], 99.90th=[ 130], 99.95th=[ 131], 00:24:10.578 | 99.99th=[ 131] 00:24:10.578 bw ( KiB/s): min= 661, max= 928, per=4.30%, avg=717.50, stdev=75.01, samples=20 00:24:10.578 iops : min= 165, max= 232, avg=179.35, stdev=18.76, samples=20 00:24:10.578 lat (msec) : 50=3.93%, 100=56.06%, 250=40.01% 00:24:10.578 cpu : usr=41.18%, sys=2.53%, ctx=1378, majf=0, minf=9 00:24:10.578 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=82.3%, 16=15.4%, 32=0.0%, >=64=0.0% 00:24:10.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.578 complete : 0=0.0%, 4=87.1%, 8=12.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.578 issued rwts: total=1807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.578 filename1: (groupid=0, jobs=1): err= 0: pid=98716: Tue Nov 19 12:44:14 2024 00:24:10.578 read: IOPS=171, BW=688KiB/s (704kB/s)(6888KiB/10014msec) 00:24:10.578 slat (usec): min=3, max=8033, avg=28.41, stdev=334.28 00:24:10.578 clat (msec): min=41, max=156, avg=92.85, stdev=20.51 00:24:10.578 lat (msec): min=41, max=156, avg=92.88, stdev=20.50 
00:24:10.578 clat percentiles (msec): 00:24:10.578 | 1.00th=[ 57], 5.00th=[ 62], 10.00th=[ 71], 20.00th=[ 72], 00:24:10.578 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 99], 60.00th=[ 108], 00:24:10.578 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 120], 95.00th=[ 121], 00:24:10.578 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:24:10.578 | 99.99th=[ 157] 00:24:10.578 bw ( KiB/s): min= 528, max= 784, per=4.10%, avg=684.65, stdev=64.29, samples=20 00:24:10.578 iops : min= 132, max= 196, avg=171.15, stdev=16.08, samples=20 00:24:10.578 lat (msec) : 50=0.87%, 100=50.87%, 250=48.26% 00:24:10.578 cpu : usr=31.27%, sys=1.91%, ctx=871, majf=0, minf=9 00:24:10.578 IO depths : 1=0.1%, 2=2.0%, 4=8.2%, 8=75.0%, 16=14.6%, 32=0.0%, >=64=0.0% 00:24:10.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.578 complete : 0=0.0%, 4=89.1%, 8=9.1%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.578 issued rwts: total=1722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.578 filename1: (groupid=0, jobs=1): err= 0: pid=98717: Tue Nov 19 12:44:14 2024 00:24:10.578 read: IOPS=176, BW=707KiB/s (724kB/s)(7108KiB/10059msec) 00:24:10.578 slat (usec): min=3, max=8022, avg=18.42, stdev=190.03 00:24:10.578 clat (msec): min=36, max=155, avg=90.30, stdev=22.74 00:24:10.578 lat (msec): min=36, max=155, avg=90.32, stdev=22.74 00:24:10.578 clat percentiles (msec): 00:24:10.578 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:24:10.578 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 93], 60.00th=[ 108], 00:24:10.578 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 121], 00:24:10.578 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 157], 00:24:10.578 | 99.99th=[ 157] 00:24:10.578 bw ( KiB/s): min= 584, max= 1024, per=4.24%, avg=706.40, stdev=111.49, samples=20 00:24:10.578 iops : min= 146, max= 256, avg=176.60, stdev=27.87, samples=20 00:24:10.578 lat (msec) : 50=5.46%, 100=51.15%, 250=43.39% 00:24:10.578 cpu : usr=31.62%, sys=1.71%, ctx=870, majf=0, minf=9 00:24:10.578 IO depths : 1=0.1%, 2=0.5%, 4=1.7%, 8=81.8%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:10.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.578 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.578 issued rwts: total=1777,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.578 filename2: (groupid=0, jobs=1): err= 0: pid=98718: Tue Nov 19 12:44:14 2024 00:24:10.578 read: IOPS=177, BW=711KiB/s (728kB/s)(7148KiB/10055msec) 00:24:10.578 slat (usec): min=3, max=10569, avg=31.27, stdev=345.87 00:24:10.578 clat (msec): min=27, max=154, avg=89.72, stdev=24.03 00:24:10.578 lat (msec): min=27, max=154, avg=89.75, stdev=24.03 00:24:10.578 clat percentiles (msec): 00:24:10.578 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 58], 20.00th=[ 69], 00:24:10.578 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 94], 60.00th=[ 106], 00:24:10.578 | 70.00th=[ 109], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 121], 00:24:10.578 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 148], 99.95th=[ 155], 00:24:10.578 | 99.99th=[ 155] 00:24:10.578 bw ( KiB/s): min= 576, max= 1128, per=4.25%, avg=708.40, stdev=139.33, samples=20 00:24:10.578 iops : min= 144, max= 282, avg=177.10, stdev=34.83, samples=20 00:24:10.578 lat (msec) : 50=7.16%, 100=46.33%, 250=46.50% 00:24:10.578 cpu : usr=35.86%, sys=2.06%, ctx=1710, majf=0, minf=9 00:24:10.578 IO 
depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.6%, 16=16.3%, 32=0.0%, >=64=0.0% 00:24:10.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.578 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.578 issued rwts: total=1787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.578 filename2: (groupid=0, jobs=1): err= 0: pid=98719: Tue Nov 19 12:44:14 2024 00:24:10.578 read: IOPS=176, BW=705KiB/s (722kB/s)(7068KiB/10023msec) 00:24:10.578 slat (usec): min=8, max=8024, avg=21.68, stdev=213.18 00:24:10.578 clat (msec): min=38, max=146, avg=90.65, stdev=21.42 00:24:10.578 lat (msec): min=38, max=146, avg=90.67, stdev=21.42 00:24:10.578 clat percentiles (msec): 00:24:10.578 | 1.00th=[ 44], 5.00th=[ 58], 10.00th=[ 67], 20.00th=[ 71], 00:24:10.578 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 87], 60.00th=[ 106], 00:24:10.578 | 70.00th=[ 109], 80.00th=[ 112], 90.00th=[ 117], 95.00th=[ 120], 00:24:10.578 | 99.00th=[ 125], 99.50th=[ 127], 99.90th=[ 144], 99.95th=[ 148], 00:24:10.578 | 99.99th=[ 148] 00:24:10.578 bw ( KiB/s): min= 616, max= 920, per=4.20%, avg=700.45, stdev=82.40, samples=20 00:24:10.578 iops : min= 154, max= 230, avg=175.10, stdev=20.60, samples=20 00:24:10.578 lat (msec) : 50=3.34%, 100=50.48%, 250=46.18% 00:24:10.578 cpu : usr=41.79%, sys=2.27%, ctx=1247, majf=0, minf=9 00:24:10.578 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:10.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.578 complete : 0=0.0%, 4=87.5%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.578 issued rwts: total=1767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.578 filename2: (groupid=0, jobs=1): err= 0: pid=98720: Tue Nov 19 12:44:14 2024 00:24:10.578 read: IOPS=174, BW=700KiB/s (716kB/s)(7036KiB/10058msec) 00:24:10.578 slat (usec): min=4, max=8025, avg=20.16, stdev=213.59 00:24:10.578 clat (msec): min=24, max=148, avg=91.24, stdev=23.50 00:24:10.578 lat (msec): min=24, max=148, avg=91.26, stdev=23.50 00:24:10.578 clat percentiles (msec): 00:24:10.578 | 1.00th=[ 34], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:24:10.578 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 99], 60.00th=[ 108], 00:24:10.578 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 120], 95.00th=[ 121], 00:24:10.578 | 99.00th=[ 126], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 148], 00:24:10.578 | 99.99th=[ 148] 00:24:10.578 bw ( KiB/s): min= 592, max= 1144, per=4.18%, avg=697.20, stdev=133.75, samples=20 00:24:10.578 iops : min= 148, max= 286, avg=174.30, stdev=33.44, samples=20 00:24:10.578 lat (msec) : 50=6.20%, 100=46.11%, 250=47.70% 00:24:10.578 cpu : usr=34.21%, sys=1.78%, ctx=954, majf=0, minf=9 00:24:10.578 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.5%, 16=16.7%, 32=0.0%, >=64=0.0% 00:24:10.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.578 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.578 issued rwts: total=1759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.578 filename2: (groupid=0, jobs=1): err= 0: pid=98721: Tue Nov 19 12:44:14 2024 00:24:10.578 read: IOPS=170, BW=682KiB/s (698kB/s)(6828KiB/10016msec) 00:24:10.578 slat (usec): min=8, max=8036, avg=22.77, stdev=216.98 00:24:10.578 clat (msec): min=41, max=155, avg=93.75, stdev=22.12 
00:24:10.578 lat (msec): min=41, max=155, avg=93.77, stdev=22.11 00:24:10.578 clat percentiles (msec): 00:24:10.578 | 1.00th=[ 58], 5.00th=[ 64], 10.00th=[ 69], 20.00th=[ 72], 00:24:10.578 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 100], 60.00th=[ 107], 00:24:10.578 | 70.00th=[ 111], 80.00th=[ 116], 90.00th=[ 121], 95.00th=[ 124], 00:24:10.578 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:24:10.578 | 99.99th=[ 155] 00:24:10.578 bw ( KiB/s): min= 512, max= 768, per=4.06%, avg=677.70, stdev=72.60, samples=20 00:24:10.578 iops : min= 128, max= 192, avg=169.40, stdev=18.16, samples=20 00:24:10.578 lat (msec) : 50=0.53%, 100=50.32%, 250=49.15% 00:24:10.578 cpu : usr=35.08%, sys=2.17%, ctx=1004, majf=0, minf=9 00:24:10.578 IO depths : 1=0.1%, 2=2.1%, 4=8.1%, 8=75.2%, 16=14.6%, 32=0.0%, >=64=0.0% 00:24:10.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.578 complete : 0=0.0%, 4=89.1%, 8=9.1%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.578 issued rwts: total=1707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.578 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.578 filename2: (groupid=0, jobs=1): err= 0: pid=98722: Tue Nov 19 12:44:14 2024 00:24:10.578 read: IOPS=178, BW=715KiB/s (732kB/s)(7184KiB/10050msec) 00:24:10.578 slat (nsec): min=4940, max=36085, avg=14988.83, stdev=4635.12 00:24:10.578 clat (msec): min=35, max=143, avg=89.27, stdev=22.91 00:24:10.579 lat (msec): min=35, max=144, avg=89.29, stdev=22.91 00:24:10.579 clat percentiles (msec): 00:24:10.579 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:24:10.579 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 106], 00:24:10.579 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 120], 95.00th=[ 121], 00:24:10.579 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 144], 99.95th=[ 144], 00:24:10.579 | 99.99th=[ 144] 00:24:10.579 bw ( KiB/s): min= 616, max= 1024, per=4.28%, avg=714.55, stdev=115.89, samples=20 00:24:10.579 iops : min= 154, max= 256, avg=178.60, stdev=28.87, samples=20 00:24:10.579 lat (msec) : 50=6.29%, 100=50.39%, 250=43.32% 00:24:10.579 cpu : usr=34.32%, sys=2.10%, ctx=965, majf=0, minf=9 00:24:10.579 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:24:10.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.579 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.579 issued rwts: total=1796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.579 filename2: (groupid=0, jobs=1): err= 0: pid=98723: Tue Nov 19 12:44:14 2024 00:24:10.579 read: IOPS=174, BW=697KiB/s (714kB/s)(6996KiB/10031msec) 00:24:10.579 slat (usec): min=4, max=8029, avg=25.60, stdev=246.86 00:24:10.579 clat (msec): min=38, max=157, avg=91.62, stdev=20.46 00:24:10.579 lat (msec): min=38, max=157, avg=91.65, stdev=20.46 00:24:10.579 clat percentiles (msec): 00:24:10.579 | 1.00th=[ 56], 5.00th=[ 64], 10.00th=[ 68], 20.00th=[ 72], 00:24:10.579 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 90], 60.00th=[ 105], 00:24:10.579 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 118], 95.00th=[ 121], 00:24:10.579 | 99.00th=[ 134], 99.50th=[ 144], 99.90th=[ 159], 99.95th=[ 159], 00:24:10.579 | 99.99th=[ 159] 00:24:10.579 bw ( KiB/s): min= 576, max= 880, per=4.15%, avg=692.70, stdev=59.61, samples=20 00:24:10.579 iops : min= 144, max= 220, avg=173.15, stdev=14.89, samples=20 00:24:10.579 lat (msec) : 50=0.51%, 100=55.35%, 250=44.14% 00:24:10.579 
cpu : usr=43.18%, sys=2.49%, ctx=1315, majf=0, minf=9 00:24:10.579 IO depths : 1=0.1%, 2=1.6%, 4=6.3%, 8=77.2%, 16=14.8%, 32=0.0%, >=64=0.0% 00:24:10.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.579 complete : 0=0.0%, 4=88.5%, 8=10.2%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.579 issued rwts: total=1749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.579 filename2: (groupid=0, jobs=1): err= 0: pid=98724: Tue Nov 19 12:44:14 2024 00:24:10.579 read: IOPS=180, BW=723KiB/s (740kB/s)(7252KiB/10034msec) 00:24:10.579 slat (usec): min=4, max=8030, avg=28.77, stdev=266.02 00:24:10.579 clat (msec): min=35, max=148, avg=88.40, stdev=21.45 00:24:10.579 lat (msec): min=35, max=148, avg=88.42, stdev=21.44 00:24:10.579 clat percentiles (msec): 00:24:10.579 | 1.00th=[ 45], 5.00th=[ 57], 10.00th=[ 63], 20.00th=[ 72], 00:24:10.579 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 97], 00:24:10.579 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 118], 95.00th=[ 121], 00:24:10.579 | 99.00th=[ 126], 99.50th=[ 132], 99.90th=[ 148], 99.95th=[ 148], 00:24:10.579 | 99.99th=[ 148] 00:24:10.579 bw ( KiB/s): min= 664, max= 928, per=4.31%, avg=718.85, stdev=67.39, samples=20 00:24:10.579 iops : min= 166, max= 232, avg=179.70, stdev=16.84, samples=20 00:24:10.579 lat (msec) : 50=3.03%, 100=58.74%, 250=38.22% 00:24:10.579 cpu : usr=39.25%, sys=2.31%, ctx=1109, majf=0, minf=9 00:24:10.579 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=81.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:24:10.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.579 complete : 0=0.0%, 4=87.3%, 8=12.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.579 issued rwts: total=1813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.579 filename2: (groupid=0, jobs=1): err= 0: pid=98725: Tue Nov 19 12:44:14 2024 00:24:10.579 read: IOPS=168, BW=675KiB/s (691kB/s)(6804KiB/10080msec) 00:24:10.579 slat (usec): min=4, max=5345, avg=17.26, stdev=129.41 00:24:10.579 clat (usec): min=1512, max=159952, avg=94556.17, stdev=28687.09 00:24:10.579 lat (usec): min=1522, max=159961, avg=94573.44, stdev=28684.04 00:24:10.579 clat percentiles (msec): 00:24:10.579 | 1.00th=[ 5], 5.00th=[ 28], 10.00th=[ 68], 20.00th=[ 72], 00:24:10.579 | 30.00th=[ 85], 40.00th=[ 96], 50.00th=[ 108], 60.00th=[ 108], 00:24:10.579 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 121], 95.00th=[ 121], 00:24:10.579 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 161], 00:24:10.579 | 99.99th=[ 161] 00:24:10.579 bw ( KiB/s): min= 528, max= 1648, per=4.04%, avg=674.00, stdev=237.05, samples=20 00:24:10.579 iops : min= 132, max= 412, avg=168.50, stdev=59.26, samples=20 00:24:10.579 lat (msec) : 2=0.94%, 10=3.76%, 50=1.88%, 100=34.80%, 250=58.61% 00:24:10.579 cpu : usr=32.27%, sys=2.04%, ctx=891, majf=0, minf=0 00:24:10.579 IO depths : 1=0.2%, 2=2.0%, 4=7.2%, 8=74.5%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:10.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.579 complete : 0=0.0%, 4=90.0%, 8=8.5%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.579 issued rwts: total=1701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.579 00:24:10.579 Run status group 0 (all jobs): 00:24:10.579 READ: bw=16.3MiB/s (17.1MB/s), 610KiB/s-751KiB/s (624kB/s-769kB/s), io=164MiB (172MB), run=10002-10080msec 00:24:10.579 
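The "Run status group 0" line above can be cross-checked from its own numbers: roughly 164 MiB of reads over a window of about 10.08 s (approximated from the run=10002-10080msec range) gives the quoted 16.3 MiB/s aggregate. A minimal sketch of that arithmetic, with the values copied from the summary line:

awk 'BEGIN { io_mib = 164; runtime_s = 10.08;   # values from the summary line above; runtime approximated
             printf "aggregate ~= %.1f MiB/s\n", io_mib / runtime_s }'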
12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
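The destroy_subsystems helper traced above issues one nvmf_delete_subsystem and one bdev_null_delete per subsystem through rpc_cmd, which in these tests is a wrapper around SPDK's scripts/rpc.py. Run by hand against the same target, the teardown of subsystem 0 would look roughly like the sketch below; the repo path is taken from this log, while the default RPC socket is an assumption.

cd /home/vagrant/spdk_repo/spdk            # repo path as seen elsewhere in this log
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # NQN as created by the test
./scripts/rpc.py bdev_null_delete bdev_null0                        # backing null bdev from the test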
00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.579 bdev_null0 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.579 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.580 [2024-11-19 12:44:14.434455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.580 12:44:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.580 bdev_null1 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:10.580 { 00:24:10.580 "params": { 00:24:10.580 "name": "Nvme$subsystem", 00:24:10.580 "trtype": "$TEST_TRANSPORT", 00:24:10.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.580 "adrfam": "ipv4", 00:24:10.580 "trsvcid": "$NVMF_PORT", 00:24:10.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.580 "hdgst": ${hdgst:-false}, 00:24:10.580 "ddgst": ${ddgst:-false} 00:24:10.580 }, 00:24:10.580 "method": "bdev_nvme_attach_controller" 00:24:10.580 } 00:24:10.580 EOF 00:24:10.580 )") 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:10.580 { 00:24:10.580 "params": { 00:24:10.580 "name": "Nvme$subsystem", 00:24:10.580 "trtype": "$TEST_TRANSPORT", 00:24:10.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.580 "adrfam": "ipv4", 00:24:10.580 "trsvcid": "$NVMF_PORT", 00:24:10.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.580 "hdgst": ${hdgst:-false}, 00:24:10.580 "ddgst": ${ddgst:-false} 00:24:10.580 }, 00:24:10.580 "method": "bdev_nvme_attach_controller" 00:24:10.580 } 00:24:10.580 EOF 00:24:10.580 )") 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:10.580 "params": { 00:24:10.580 "name": "Nvme0", 00:24:10.580 "trtype": "tcp", 00:24:10.580 "traddr": "10.0.0.3", 00:24:10.580 "adrfam": "ipv4", 00:24:10.580 "trsvcid": "4420", 00:24:10.580 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:10.580 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:10.580 "hdgst": false, 00:24:10.580 "ddgst": false 00:24:10.580 }, 00:24:10.580 "method": "bdev_nvme_attach_controller" 00:24:10.580 },{ 00:24:10.580 "params": { 00:24:10.580 "name": "Nvme1", 00:24:10.580 "trtype": "tcp", 00:24:10.580 "traddr": "10.0.0.3", 00:24:10.580 "adrfam": "ipv4", 00:24:10.580 "trsvcid": "4420", 00:24:10.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:10.580 "hdgst": false, 00:24:10.580 "ddgst": false 00:24:10.580 }, 00:24:10.580 "method": "bdev_nvme_attach_controller" 00:24:10.580 }' 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:10.580 12:44:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:10.580 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:10.580 ... 00:24:10.580 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:10.580 ... 
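The job description echoed above (rw=randread, bs 8k/16k/128k, ioengine=spdk_bdev, iodepth=8, two filename sections with numjobs=2, hence the four threads that start next) is generated on the fly by gen_fio_conf and handed to fio over /dev/fd/62. A minimal job file expressing the same shape would look roughly like this sketch; the bdev names used as filenames are assumptions, and it still has to be launched through the spdk_bdev fio plugin with --spdk_json_conf, as the trace shows.

cat > dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
EOF
# bdev names (Nvme0n1/Nvme1n1) are assumptions; parameters match the dif.sh@115 settings above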
00:24:10.580 fio-3.35 00:24:10.580 Starting 4 threads 00:24:15.855 00:24:15.855 filename0: (groupid=0, jobs=1): err= 0: pid=98871: Tue Nov 19 12:44:20 2024 00:24:15.855 read: IOPS=2079, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5001msec) 00:24:15.855 slat (nsec): min=3150, max=61320, avg=14936.99, stdev=4599.87 00:24:15.855 clat (usec): min=2818, max=5214, avg=3788.32, stdev=242.38 00:24:15.855 lat (usec): min=2828, max=5228, avg=3803.26, stdev=243.03 00:24:15.855 clat percentiles (usec): 00:24:15.855 | 1.00th=[ 3458], 5.00th=[ 3523], 10.00th=[ 3556], 20.00th=[ 3621], 00:24:15.855 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3785], 00:24:15.855 | 70.00th=[ 3884], 80.00th=[ 3949], 90.00th=[ 4080], 95.00th=[ 4293], 00:24:15.855 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 5080], 99.95th=[ 5145], 00:24:15.855 | 99.99th=[ 5211] 00:24:15.855 bw ( KiB/s): min=15360, max=17408, per=24.20%, avg=16586.56, stdev=792.74, samples=9 00:24:15.855 iops : min= 1920, max= 2176, avg=2073.22, stdev=99.24, samples=9 00:24:15.855 lat (msec) : 4=85.62%, 10=14.37% 00:24:15.855 cpu : usr=91.60%, sys=7.56%, ctx=14, majf=0, minf=9 00:24:15.855 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:15.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.855 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.855 issued rwts: total=10400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.855 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:15.855 filename0: (groupid=0, jobs=1): err= 0: pid=98872: Tue Nov 19 12:44:20 2024 00:24:15.855 read: IOPS=2109, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5001msec) 00:24:15.855 slat (usec): min=3, max=141, avg=14.34, stdev= 5.24 00:24:15.855 clat (usec): min=1027, max=7329, avg=3737.06, stdev=336.67 00:24:15.855 lat (usec): min=1034, max=7389, avg=3751.40, stdev=337.01 00:24:15.855 clat percentiles (usec): 00:24:15.855 | 1.00th=[ 1893], 5.00th=[ 3523], 10.00th=[ 3556], 20.00th=[ 3589], 00:24:15.855 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3687], 60.00th=[ 3752], 00:24:15.855 | 70.00th=[ 3851], 80.00th=[ 3949], 90.00th=[ 4047], 95.00th=[ 4178], 00:24:15.855 | 99.00th=[ 4424], 99.50th=[ 4555], 99.90th=[ 4883], 99.95th=[ 5014], 00:24:15.855 | 99.99th=[ 5080] 00:24:15.855 bw ( KiB/s): min=15472, max=17408, per=24.54%, avg=16821.33, stdev=685.39, samples=9 00:24:15.855 iops : min= 1934, max= 2176, avg=2102.67, stdev=85.67, samples=9 00:24:15.855 lat (msec) : 2=1.06%, 4=85.74%, 10=13.20% 00:24:15.855 cpu : usr=91.38%, sys=7.74%, ctx=26, majf=0, minf=0 00:24:15.855 IO depths : 1=0.1%, 2=23.8%, 4=50.7%, 8=25.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:15.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.855 complete : 0=0.0%, 4=90.5%, 8=9.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.855 issued rwts: total=10549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.855 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:15.855 filename1: (groupid=0, jobs=1): err= 0: pid=98873: Tue Nov 19 12:44:20 2024 00:24:15.855 read: IOPS=2079, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5001msec) 00:24:15.855 slat (nsec): min=3149, max=71448, avg=14518.39, stdev=4601.39 00:24:15.855 clat (usec): min=2785, max=5226, avg=3790.65, stdev=242.28 00:24:15.855 lat (usec): min=2792, max=5237, avg=3805.17, stdev=242.87 00:24:15.855 clat percentiles (usec): 00:24:15.855 | 1.00th=[ 3458], 5.00th=[ 3556], 10.00th=[ 3556], 20.00th=[ 3621], 00:24:15.855 | 30.00th=[ 3654], 40.00th=[ 3654], 
50.00th=[ 3720], 60.00th=[ 3785], 00:24:15.855 | 70.00th=[ 3884], 80.00th=[ 3949], 90.00th=[ 4080], 95.00th=[ 4293], 00:24:15.855 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 5080], 99.95th=[ 5145], 00:24:15.855 | 99.99th=[ 5211] 00:24:15.855 bw ( KiB/s): min=15360, max=17408, per=24.20%, avg=16586.56, stdev=792.74, samples=9 00:24:15.855 iops : min= 1920, max= 2176, avg=2073.22, stdev=99.24, samples=9 00:24:15.855 lat (msec) : 4=85.38%, 10=14.62% 00:24:15.855 cpu : usr=92.04%, sys=7.14%, ctx=48, majf=0, minf=0 00:24:15.855 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:15.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.856 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.856 issued rwts: total=10400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.856 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:15.856 filename1: (groupid=0, jobs=1): err= 0: pid=98874: Tue Nov 19 12:44:20 2024 00:24:15.856 read: IOPS=2303, BW=18.0MiB/s (18.9MB/s)(90.0MiB/5004msec) 00:24:15.856 slat (nsec): min=4724, max=52461, avg=12078.05, stdev=4416.39 00:24:15.856 clat (usec): min=601, max=8503, avg=3431.76, stdev=736.84 00:24:15.856 lat (usec): min=610, max=8520, avg=3443.84, stdev=737.56 00:24:15.856 clat percentiles (usec): 00:24:15.856 | 1.00th=[ 1319], 5.00th=[ 1401], 10.00th=[ 2573], 20.00th=[ 3458], 00:24:15.856 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3687], 00:24:15.856 | 70.00th=[ 3720], 80.00th=[ 3818], 90.00th=[ 3949], 95.00th=[ 4080], 00:24:15.856 | 99.00th=[ 4359], 99.50th=[ 4555], 99.90th=[ 4883], 99.95th=[ 8291], 00:24:15.856 | 99.99th=[ 8356] 00:24:15.856 bw ( KiB/s): min=16768, max=22000, per=26.88%, avg=18428.80, stdev=2109.85, samples=10 00:24:15.856 iops : min= 2096, max= 2750, avg=2303.60, stdev=263.73, samples=10 00:24:15.856 lat (usec) : 750=0.03%, 1000=0.03% 00:24:15.856 lat (msec) : 2=9.41%, 4=83.62%, 10=6.90% 00:24:15.856 cpu : usr=92.36%, sys=6.74%, ctx=22, majf=0, minf=0 00:24:15.856 IO depths : 1=0.1%, 2=16.1%, 4=54.9%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:15.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.856 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.856 issued rwts: total=11526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.856 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:15.856 00:24:15.856 Run status group 0 (all jobs): 00:24:15.856 READ: bw=66.9MiB/s (70.2MB/s), 16.2MiB/s-18.0MiB/s (17.0MB/s-18.9MB/s), io=335MiB (351MB), run=5001-5004msec 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.856 12:44:20 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:15.856 ************************************ 00:24:15.856 END TEST fio_dif_rand_params 00:24:15.856 ************************************ 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.856 00:24:15.856 real 0m22.988s 00:24:15.856 user 2m3.746s 00:24:15.856 sys 0m8.652s 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:15.856 12:44:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:15.856 12:44:20 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:15.856 12:44:20 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:15.856 12:44:20 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:15.856 12:44:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:15.856 ************************************ 00:24:15.856 START TEST fio_dif_digest 00:24:15.856 ************************************ 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:15.856 bdev_null0 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:15.856 [2024-11-19 12:44:20.442734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:15.856 { 00:24:15.856 "params": { 00:24:15.856 "name": "Nvme$subsystem", 00:24:15.856 "trtype": "$TEST_TRANSPORT", 00:24:15.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.856 "adrfam": "ipv4", 00:24:15.856 "trsvcid": "$NVMF_PORT", 00:24:15.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.856 "hdgst": ${hdgst:-false}, 00:24:15.856 "ddgst": ${ddgst:-false} 00:24:15.856 }, 00:24:15.856 
"method": "bdev_nvme_attach_controller" 00:24:15.856 } 00:24:15.856 EOF 00:24:15.856 )") 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:15.856 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:15.857 "params": { 00:24:15.857 "name": "Nvme0", 00:24:15.857 "trtype": "tcp", 00:24:15.857 "traddr": "10.0.0.3", 00:24:15.857 "adrfam": "ipv4", 00:24:15.857 "trsvcid": "4420", 00:24:15.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:15.857 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:15.857 "hdgst": true, 00:24:15.857 "ddgst": true 00:24:15.857 }, 00:24:15.857 "method": "bdev_nvme_attach_controller" 00:24:15.857 }' 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:15.857 12:44:20 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:15.857 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:15.857 ... 
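Before launching fio, the trace above probes the plugin with ldd for libasan and libclang_rt.asan and, if either is linked in, prepends that sanitizer runtime to LD_PRELOAD ahead of the spdk_bdev plugin (neither is present here, so only the plugin path ends up in LD_PRELOAD). A standalone sketch of that check, with the plugin path taken from this log:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev   # path as printed in the trace
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')  # empty when ASAN is not linked in
export LD_PRELOAD="$asan_lib $plugin"
# the real helper repeats the same lookup for libclang_rt.asan, then runs
# /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ... as shown above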
00:24:15.857 fio-3.35 00:24:15.857 Starting 3 threads 00:24:28.072 00:24:28.072 filename0: (groupid=0, jobs=1): err= 0: pid=98980: Tue Nov 19 12:44:31 2024 00:24:28.072 read: IOPS=251, BW=31.4MiB/s (33.0MB/s)(315MiB/10008msec) 00:24:28.072 slat (nsec): min=6745, max=51651, avg=9655.95, stdev=4178.46 00:24:28.072 clat (usec): min=10547, max=13847, avg=11905.29, stdev=445.07 00:24:28.072 lat (usec): min=10555, max=13860, avg=11914.94, stdev=445.48 00:24:28.072 clat percentiles (usec): 00:24:28.072 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11600], 20.00th=[11600], 00:24:28.072 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11731], 60.00th=[11863], 00:24:28.072 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:24:28.072 | 99.00th=[13566], 99.50th=[13698], 99.90th=[13829], 99.95th=[13829], 00:24:28.072 | 99.99th=[13829] 00:24:28.072 bw ( KiB/s): min=31488, max=33792, per=33.32%, avg=32175.16, stdev=719.30, samples=19 00:24:28.072 iops : min= 246, max= 264, avg=251.37, stdev= 5.62, samples=19 00:24:28.072 lat (msec) : 20=100.00% 00:24:28.072 cpu : usr=90.92%, sys=8.51%, ctx=24, majf=0, minf=9 00:24:28.072 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.072 issued rwts: total=2517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.072 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:28.072 filename0: (groupid=0, jobs=1): err= 0: pid=98981: Tue Nov 19 12:44:31 2024 00:24:28.072 read: IOPS=251, BW=31.4MiB/s (33.0MB/s)(315MiB/10008msec) 00:24:28.072 slat (nsec): min=6743, max=44123, avg=9401.50, stdev=3843.26 00:24:28.072 clat (usec): min=7900, max=13835, avg=11904.93, stdev=468.82 00:24:28.072 lat (usec): min=7922, max=13861, avg=11914.33, stdev=468.99 00:24:28.072 clat percentiles (usec): 00:24:28.072 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11600], 20.00th=[11600], 00:24:28.072 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11731], 60.00th=[11863], 00:24:28.072 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:24:28.072 | 99.00th=[13435], 99.50th=[13698], 99.90th=[13829], 99.95th=[13829], 00:24:28.072 | 99.99th=[13829] 00:24:28.072 bw ( KiB/s): min=31488, max=33024, per=33.32%, avg=32175.16, stdev=566.38, samples=19 00:24:28.072 iops : min= 246, max= 258, avg=251.37, stdev= 4.42, samples=19 00:24:28.072 lat (msec) : 10=0.12%, 20=99.88% 00:24:28.072 cpu : usr=90.48%, sys=8.99%, ctx=16, majf=0, minf=0 00:24:28.072 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.072 issued rwts: total=2517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.072 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:28.072 filename0: (groupid=0, jobs=1): err= 0: pid=98982: Tue Nov 19 12:44:31 2024 00:24:28.072 read: IOPS=251, BW=31.4MiB/s (33.0MB/s)(315MiB/10008msec) 00:24:28.072 slat (nsec): min=6727, max=42476, avg=9358.30, stdev=3632.16 00:24:28.072 clat (usec): min=9277, max=13721, avg=11904.84, stdev=451.94 00:24:28.072 lat (usec): min=9284, max=13734, avg=11914.20, stdev=452.12 00:24:28.072 clat percentiles (usec): 00:24:28.072 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11600], 20.00th=[11600], 00:24:28.072 | 30.00th=[11600], 40.00th=[11731], 
50.00th=[11731], 60.00th=[11863], 00:24:28.072 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12518], 95.00th=[12911], 00:24:28.072 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13698], 99.95th=[13698], 00:24:28.072 | 99.99th=[13698] 00:24:28.072 bw ( KiB/s): min=31488, max=33024, per=33.32%, avg=32175.16, stdev=566.38, samples=19 00:24:28.072 iops : min= 246, max= 258, avg=251.37, stdev= 4.42, samples=19 00:24:28.072 lat (msec) : 10=0.12%, 20=99.88% 00:24:28.072 cpu : usr=91.20%, sys=8.25%, ctx=13, majf=0, minf=0 00:24:28.072 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.072 issued rwts: total=2517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.072 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:28.072 00:24:28.072 Run status group 0 (all jobs): 00:24:28.072 READ: bw=94.3MiB/s (98.9MB/s), 31.4MiB/s-31.4MiB/s (33.0MB/s-33.0MB/s), io=944MiB (990MB), run=10008-10008msec 00:24:28.072 12:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:28.072 12:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:28.072 12:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:28.072 12:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:28.072 12:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:28.072 12:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:28.072 12:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.072 12:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:28.072 12:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.072 12:44:31 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:28.072 12:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.072 12:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:28.072 12:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.072 00:24:28.072 real 0m10.871s 00:24:28.072 user 0m27.841s 00:24:28.072 sys 0m2.811s 00:24:28.073 ************************************ 00:24:28.073 END TEST fio_dif_digest 00:24:28.073 ************************************ 00:24:28.073 12:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:28.073 12:44:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:28.073 12:44:31 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:28.073 12:44:31 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:28.073 12:44:31 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:28.073 12:44:31 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:24:28.073 12:44:31 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.073 12:44:31 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:24:28.073 12:44:31 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.073 12:44:31 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.073 rmmod nvme_tcp 00:24:28.073 rmmod nvme_fabrics 00:24:28.073 rmmod nvme_keyring 00:24:28.073 12:44:31 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.073 12:44:31 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:24:28.073 12:44:31 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:24:28.073 12:44:31 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 98240 ']' 00:24:28.073 12:44:31 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 98240 00:24:28.073 12:44:31 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 98240 ']' 00:24:28.073 12:44:31 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 98240 00:24:28.073 12:44:31 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:24:28.073 12:44:31 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.073 12:44:31 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98240 00:24:28.073 killing process with pid 98240 00:24:28.073 12:44:31 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:28.073 12:44:31 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:28.073 12:44:31 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98240' 00:24:28.073 12:44:31 nvmf_dif -- common/autotest_common.sh@969 -- # kill 98240 00:24:28.073 12:44:31 nvmf_dif -- common/autotest_common.sh@974 -- # wait 98240 00:24:28.073 12:44:31 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:24:28.073 12:44:31 nvmf_dif -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:28.073 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:28.073 Waiting for block devices as requested 00:24:28.073 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:28.073 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:28.073 12:44:32 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.073 12:44:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:28.073 12:44:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.073 12:44:32 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:24:28.073 ************************************ 00:24:28.073 END TEST nvmf_dif 00:24:28.073 ************************************ 00:24:28.073 00:24:28.073 real 0m58.387s 00:24:28.073 user 3m44.691s 00:24:28.073 sys 0m20.329s 00:24:28.073 12:44:32 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:28.073 12:44:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:28.073 12:44:32 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:28.073 12:44:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:28.073 12:44:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:28.073 12:44:32 -- common/autotest_common.sh@10 -- # set +x 00:24:28.073 ************************************ 00:24:28.073 START TEST nvmf_abort_qd_sizes 00:24:28.073 ************************************ 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:28.073 * Looking for test storage... 00:24:28.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:28.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.073 --rc genhtml_branch_coverage=1 00:24:28.073 --rc genhtml_function_coverage=1 00:24:28.073 --rc genhtml_legend=1 00:24:28.073 --rc geninfo_all_blocks=1 00:24:28.073 --rc geninfo_unexecuted_blocks=1 00:24:28.073 00:24:28.073 ' 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:28.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.073 --rc genhtml_branch_coverage=1 00:24:28.073 --rc genhtml_function_coverage=1 00:24:28.073 --rc genhtml_legend=1 00:24:28.073 --rc geninfo_all_blocks=1 00:24:28.073 --rc geninfo_unexecuted_blocks=1 00:24:28.073 00:24:28.073 ' 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:28.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.073 --rc genhtml_branch_coverage=1 00:24:28.073 --rc genhtml_function_coverage=1 00:24:28.073 --rc genhtml_legend=1 00:24:28.073 --rc geninfo_all_blocks=1 00:24:28.073 --rc geninfo_unexecuted_blocks=1 00:24:28.073 00:24:28.073 ' 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:28.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.073 --rc genhtml_branch_coverage=1 00:24:28.073 --rc genhtml_function_coverage=1 00:24:28.073 --rc genhtml_legend=1 00:24:28.073 --rc geninfo_all_blocks=1 00:24:28.073 --rc geninfo_unexecuted_blocks=1 00:24:28.073 00:24:28.073 ' 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.073 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
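The scripts/common.sh trace above walks the lt/cmp_versions helper: split both version strings on '.', '-' and ':', then compare field by field to decide whether the installed lcov is older than 2 and therefore needs the older --rc lcov_*_coverage=1 option spelling. A minimal re-implementation sketch; version_lt is an illustrative name, not the helper used by the repository:

```bash
# Dotted-version comparison in plain bash, as traced above.
version_lt() {                 # usage: version_lt 1.15 2  -> returns 0 if $1 < $2
    local IFS=.-:
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        ((10#$x < 10#$y)) && return 0
        ((10#$x > 10#$y)) && return 1
    done
    return 1                   # equal -> not less-than
}

# Same decision the harness makes: pre-2.x lcov wants the --rc spelling.
version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
```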
NVMF_IP_PREFIX=192.168.100 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.074 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@456 -- # nvmf_veth_init 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:28.074 Cannot find device "nvmf_init_br" 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:28.074 Cannot find device "nvmf_init_br2" 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:28.074 Cannot find device "nvmf_tgt_br" 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.074 Cannot find device "nvmf_tgt_br2" 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:28.074 Cannot find device "nvmf_init_br" 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:28.074 Cannot find device "nvmf_init_br2" 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:28.074 Cannot find device "nvmf_tgt_br" 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:28.074 Cannot find device "nvmf_tgt_br2" 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:28.074 Cannot find device "nvmf_br" 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:28.074 Cannot find device "nvmf_init_if" 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:28.074 Cannot find device "nvmf_init_if2" 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:28.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:28.074 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:28.075 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:28.075 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:28.075 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:28.075 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:28.075 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:28.075 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:28.075 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:28.075 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:28.075 12:44:32 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:28.075 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:28.075 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:28.075 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:28.075 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:28.075 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:28.075 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:28.075 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:28.075 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:28.075 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:28.075 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:24:28.075 00:24:28.075 --- 10.0.0.3 ping statistics --- 00:24:28.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.075 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:24:28.075 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:28.075 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:28.075 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:24:28.075 00:24:28.075 --- 10.0.0.4 ping statistics --- 00:24:28.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.075 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:24:28.075 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:28.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:24:28.075 00:24:28.075 --- 10.0.0.1 ping statistics --- 00:24:28.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.075 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:24:28.075 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:28.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:28.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:24:28.075 00:24:28.075 --- 10.0.0.2 ping statistics --- 00:24:28.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.075 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:24:28.075 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.075 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # return 0 00:24:28.075 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:24:28.075 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:28.644 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:28.644 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:28.644 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:28.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=99623 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 99623 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 99623 ']' 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:28.904 12:44:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:28.904 [2024-11-19 12:44:34.012871] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
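Before the target was launched above, nvmf_veth_init built the whole software test network: a namespace for the SPDK target, two veth pairs per side, a bridge joining the host-facing ends, 10.0.0.0/24 addressing (initiators .1/.2 on the host, targets .3/.4 in the namespace), tagged iptables ACCEPT rules for port 4420, and ping checks in both directions. A condensed sketch of that topology using the names and addresses from the trace; the iptables comments are shortened here to the SPDK_NVMF tag that the teardown greps for:

```bash
#!/usr/bin/env bash
# Condensed nvmf_veth_init as traced above; error handling omitted.
set -ex

ip netns add nvmf_tgt_ns_spdk

# Two initiator-side pairs and two target-side pairs.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-facing ends live inside the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiators .1/.2 on the host, targets .3/.4 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up.
for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" master nvmf_br
done

# Allow NVMe/TCP traffic in, tagging the rules so cleanup can find them later.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

# Basic connectivity checks, mirroring the pings in the log.
ping -c 1 10.0.0.3
ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
```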
00:24:28.904 [2024-11-19 12:44:34.012966] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.904 [2024-11-19 12:44:34.154443] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:29.164 [2024-11-19 12:44:34.201914] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.164 [2024-11-19 12:44:34.201983] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.164 [2024-11-19 12:44:34.201997] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.164 [2024-11-19 12:44:34.202008] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.164 [2024-11-19 12:44:34.202016] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:29.164 [2024-11-19 12:44:34.205703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.164 [2024-11-19 12:44:34.205860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.164 [2024-11-19 12:44:34.205998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:29.164 [2024-11-19 12:44:34.206009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.164 [2024-11-19 12:44:34.245233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:24:29.164 12:44:34 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
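Before picking a controller for spdk_target_abort, nvme_in_userspace enumerates NVMe functions by PCI class code and filters them through pci_can_use, as traced above. A minimal sketch of the discovery step, assuming lspci is available; the nvme_bdfs array name is illustrative, and the allow/block-list filtering is reduced to "accept everything":

```bash
# lspci -mm -n -D prints machine-readable records such as:
#   0000:00:10.0 "0108" "1b36" "0010" ... -p02
# class 0108 = mass storage / non-volatile memory controller, prog-if 02 = NVMe.
mapfile -t nvme_bdfs < <(lspci -mm -n -D | grep -i -- -p02 |
    awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"')

printf 'found %d NVMe controller(s)\n' "${#nvme_bdfs[@]}"
printf '  %s\n' "${nvme_bdfs[@]}"    # 0000:00:10.0 and 0000:00:11.0 in this run
```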
00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:29.164 12:44:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:29.164 ************************************ 00:24:29.164 START TEST spdk_target_abort 00:24:29.164 ************************************ 00:24:29.164 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:24:29.164 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:29.164 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:29.164 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.164 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:29.423 spdk_targetn1 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:29.423 [2024-11-19 12:44:34.459620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:29.423 [2024-11-19 12:44:34.491815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:29.423 12:44:34 
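spdk_target_abort provisions the target entirely over JSON-RPC before any I/O, as traced above: attach the local PCIe controller 0000:00:10.0 as bdev controller spdk_target (exposing spdk_targetn1), create the TCP transport, create the test subsystem, add the namespace, and open a listener on 10.0.0.3:4420. The harness issues these through its rpc_cmd helper against the /var/tmp/spdk.sock socket named earlier in the log; writing them as direct scripts/rpc.py calls, as below, is a sketch of the same sequence rather than the harness's exact mechanism:

```bash
# Equivalent of the rpc_cmd sequence traced above, as standalone rpc.py calls.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # default socket /var/tmp/spdk.sock

$RPC bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
$RPC nvmf_create_transport -t tcp -o -u 8192      # transport options copied from the trace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420
```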
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:29.423 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:29.424 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:29.424 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:29.424 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:29.424 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:29.424 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:29.424 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:24:29.424 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:29.424 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:29.424 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:29.424 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:29.424 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:29.424 12:44:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:32.713 Initializing NVMe Controllers 00:24:32.713 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:32.713 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:32.713 Initialization complete. Launching workers. 
00:24:32.713 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9876, failed: 0 00:24:32.713 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1038, failed to submit 8838 00:24:32.713 success 865, unsuccessful 173, failed 0 00:24:32.713 12:44:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:32.713 12:44:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:36.001 Initializing NVMe Controllers 00:24:36.001 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:36.001 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:36.001 Initialization complete. Launching workers. 00:24:36.001 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9029, failed: 0 00:24:36.001 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1185, failed to submit 7844 00:24:36.001 success 372, unsuccessful 813, failed 0 00:24:36.001 12:44:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:36.001 12:44:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:39.288 Initializing NVMe Controllers 00:24:39.288 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:39.288 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:39.288 Initialization complete. Launching workers. 
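The rabort helper traced through these runs assembles a transport ID from trtype/adrfam/traddr/trsvcid/subnqn and reruns the bundled abort example at queue depths 4, 24 and 64, recording how many aborts were submitted successfully, unsuccessfully, or not at all. A condensed sketch of that loop, with the path and transport ID taken from the trace:

```bash
# Condensed form of the rabort loop (queue depths 4, 24, 64 against the SPDK target).
ABORT=/home/vagrant/spdk_repo/spdk/build/examples/abort
TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

for qd in 4 24 64; do
    # -w rw -M 50 -o 4096: mixed 50/50 read/write with 4 KiB I/O; the tool submits
    # aborts for outstanding commands and reports success/unsuccessful/failed counts.
    "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$TRID"
done
```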
00:24:39.288 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31441, failed: 0 00:24:39.288 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2293, failed to submit 29148 00:24:39.288 success 422, unsuccessful 1871, failed 0 00:24:39.288 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:39.288 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.288 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:39.288 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.288 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:39.288 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.288 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:39.547 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.547 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99623 00:24:39.547 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 99623 ']' 00:24:39.547 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 99623 00:24:39.547 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:24:39.547 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:39.547 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99623 00:24:39.547 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:39.547 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:39.547 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99623' 00:24:39.547 killing process with pid 99623 00:24:39.547 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 99623 00:24:39.547 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 99623 00:24:39.547 00:24:39.547 real 0m10.394s 00:24:39.547 user 0m39.828s 00:24:39.547 sys 0m2.060s 00:24:39.547 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:39.547 12:44:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:39.547 ************************************ 00:24:39.547 END TEST spdk_target_abort 00:24:39.547 ************************************ 00:24:39.806 12:44:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:39.806 12:44:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:39.806 12:44:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:39.806 12:44:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:39.806 ************************************ 00:24:39.806 START TEST kernel_target_abort 00:24:39.806 
************************************ 00:24:39.806 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:39.807 12:44:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:40.066 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:40.066 Waiting for block devices as requested 00:24:40.066 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:40.325 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:40.325 No valid GPT data, bailing 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:40.325 No valid GPT data, bailing 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:40.325 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:40.585 No valid GPT data, bailing 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:40.585 No valid GPT data, bailing 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 --hostid=bae1b18f-cc14-461e-aa63-e888be1a2cc9 -a 10.0.0.1 -t tcp -s 4420 00:24:40.585 00:24:40.585 Discovery Log Number of Records 2, Generation counter 2 00:24:40.585 =====Discovery Log Entry 0====== 00:24:40.585 trtype: tcp 00:24:40.585 adrfam: ipv4 00:24:40.585 subtype: current discovery subsystem 00:24:40.585 treq: not specified, sq flow control disable supported 00:24:40.585 portid: 1 00:24:40.585 trsvcid: 4420 00:24:40.585 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:40.585 traddr: 10.0.0.1 00:24:40.585 eflags: none 00:24:40.585 sectype: none 00:24:40.585 =====Discovery Log Entry 1====== 00:24:40.585 trtype: tcp 00:24:40.585 adrfam: ipv4 00:24:40.585 subtype: nvme subsystem 00:24:40.585 treq: not specified, sq flow control disable supported 00:24:40.585 portid: 1 00:24:40.585 trsvcid: 4420 00:24:40.585 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:40.585 traddr: 10.0.0.1 00:24:40.585 eflags: none 00:24:40.585 sectype: none 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:40.585 12:44:45 
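kernel_target_abort configures the in-kernel nvmet target purely through configfs, as traced above: take the free NVMe block device found by the GPT probes (/dev/nvme1n1 in this run), create a subsystem with one namespace backed by it, open a TCP port on 10.0.0.1:4420, link the subsystem into the port, and confirm with nvme discover. xtrace hides redirection targets, so the attribute file names in this sketch are the standard nvmet configfs ones rather than values copied from the log:

```bash
#!/usr/bin/env bash
# Condensed configure_kernel_target sequence; NQN, device and address from this run.
set -ex

modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"

echo 1            > "$subsys/attr_allow_any_host"       # let any hostnqn connect
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"  # backing block device picked above
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"   # host-side address; the kernel target runs outside the netns
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

# Expose the subsystem on the port, then verify both discovery entries appear.
ln -s "$subsys" "$port/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420
```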
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:40.585 12:44:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:43.909 Initializing NVMe Controllers 00:24:43.909 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:43.909 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:43.909 Initialization complete. Launching workers. 00:24:43.909 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32204, failed: 0 00:24:43.909 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32204, failed to submit 0 00:24:43.909 success 0, unsuccessful 32204, failed 0 00:24:43.909 12:44:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:43.909 12:44:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:47.200 Initializing NVMe Controllers 00:24:47.200 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:47.200 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:47.200 Initialization complete. Launching workers. 
00:24:47.200 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65171, failed: 0 00:24:47.200 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25855, failed to submit 39316 00:24:47.200 success 0, unsuccessful 25855, failed 0 00:24:47.200 12:44:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:47.200 12:44:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:50.490 Initializing NVMe Controllers 00:24:50.490 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:50.490 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:50.490 Initialization complete. Launching workers. 00:24:50.490 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68305, failed: 0 00:24:50.490 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17030, failed to submit 51275 00:24:50.490 success 0, unsuccessful 17030, failed 0 00:24:50.490 12:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:50.490 12:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:50.490 12:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:24:50.490 12:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:50.490 12:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:50.490 12:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:50.490 12:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:50.490 12:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:24:50.490 12:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:24:50.490 12:44:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:50.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:51.316 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:51.575 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:51.575 ************************************ 00:24:51.575 END TEST kernel_target_abort 00:24:51.575 ************************************ 00:24:51.575 00:24:51.575 real 0m11.834s 00:24:51.575 user 0m5.764s 00:24:51.575 sys 0m3.374s 00:24:51.575 12:44:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:51.575 12:44:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:51.575 
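The kernel_target_abort run above does not use an SPDK target at all: it stands up the in-kernel nvmet target through configfs, points it at the first unused non-zoned namespace it found (/dev/nvme1n1), and exposes it on 10.0.0.1:4420 over TCP. A condensed sketch of that setup, reconstructed from the trace; the configfs attribute file names are assumptions (the standard nvmet names), since the trace only records the values being echoed into them:

modprobe nvmet nvmet_tcp                 # assumed prerequisite; the trace only shows the matching modprobe -r at cleanup
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # attribute name assumed
echo 1 > "$subsys/attr_allow_any_host"                         # attribute name assumed
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420   # should report the discovery subsystem plus nqn.2016-06.io.spdk:testnqn

The abort example above was then run against that listener once per queue depth in qds=(4 24 64), and clean_kernel_target tore everything down in reverse: disable the namespace, remove the port/subsystem symlink and directories, then modprobe -r nvmet_tcp nvmet.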
12:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:51.575 rmmod nvme_tcp 00:24:51.575 rmmod nvme_fabrics 00:24:51.575 rmmod nvme_keyring 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:51.575 Process with pid 99623 is not found 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 99623 ']' 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 99623 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 99623 ']' 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 99623 00:24:51.575 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (99623) - No such process 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 99623 is not found' 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:24:51.575 12:44:56 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:52.142 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:52.142 Waiting for block devices as requested 00:24:52.142 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:52.142 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:52.142 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:52.142 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:52.142 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:24:52.142 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:24:52.142 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:52.142 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:52.401 12:44:57 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:24:52.401 00:24:52.401 real 0m25.193s 00:24:52.401 user 0m46.785s 00:24:52.401 sys 0m6.828s 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:52.401 12:44:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:52.401 ************************************ 00:24:52.401 END TEST nvmf_abort_qd_sizes 00:24:52.401 ************************************ 00:24:52.661 12:44:57 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:52.661 12:44:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:52.661 12:44:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:52.661 12:44:57 -- common/autotest_common.sh@10 -- # set +x 00:24:52.661 ************************************ 00:24:52.661 START TEST keyring_file 00:24:52.661 ************************************ 00:24:52.661 12:44:57 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:52.661 * Looking for test storage... 
00:24:52.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:52.661 12:44:57 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:52.661 12:44:57 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:24:52.661 12:44:57 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:52.661 12:44:57 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@345 -- # : 1 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@353 -- # local d=1 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@355 -- # echo 1 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@353 -- # local d=2 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@355 -- # echo 2 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@368 -- # return 0 00:24:52.661 12:44:57 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.661 12:44:57 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:52.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.661 --rc genhtml_branch_coverage=1 00:24:52.661 --rc genhtml_function_coverage=1 00:24:52.661 --rc genhtml_legend=1 00:24:52.661 --rc geninfo_all_blocks=1 00:24:52.661 --rc geninfo_unexecuted_blocks=1 00:24:52.661 00:24:52.661 ' 00:24:52.661 12:44:57 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:52.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.661 --rc genhtml_branch_coverage=1 00:24:52.661 --rc genhtml_function_coverage=1 00:24:52.661 --rc genhtml_legend=1 00:24:52.661 --rc geninfo_all_blocks=1 00:24:52.661 --rc 
geninfo_unexecuted_blocks=1 00:24:52.661 00:24:52.661 ' 00:24:52.661 12:44:57 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:52.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.661 --rc genhtml_branch_coverage=1 00:24:52.661 --rc genhtml_function_coverage=1 00:24:52.661 --rc genhtml_legend=1 00:24:52.661 --rc geninfo_all_blocks=1 00:24:52.661 --rc geninfo_unexecuted_blocks=1 00:24:52.661 00:24:52.661 ' 00:24:52.661 12:44:57 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:52.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.661 --rc genhtml_branch_coverage=1 00:24:52.661 --rc genhtml_function_coverage=1 00:24:52.661 --rc genhtml_legend=1 00:24:52.661 --rc geninfo_all_blocks=1 00:24:52.661 --rc geninfo_unexecuted_blocks=1 00:24:52.661 00:24:52.661 ' 00:24:52.661 12:44:57 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:52.661 12:44:57 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.661 12:44:57 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.661 12:44:57 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.661 12:44:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.661 12:44:57 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.661 12:44:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:52.661 12:44:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@51 -- # : 0 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.661 12:44:57 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:52.662 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:52.662 12:44:57 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:52.662 12:44:57 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:52.662 12:44:57 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:52.921 12:44:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:52.921 12:44:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:52.921 12:44:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:52.921 12:44:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:52.921 12:44:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:52.921 12:44:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:52.921 12:44:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:52.921 12:44:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:52.921 12:44:57 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:52.921 12:44:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:52.921 12:44:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:52.921 12:44:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:52.921 12:44:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.r4D5GcLCar 00:24:52.921 12:44:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:52.921 12:44:57 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:52.921 12:44:57 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:24:52.921 12:44:57 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:24:52.921 12:44:57 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:24:52.921 12:44:57 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:24:52.921 12:44:57 keyring_file -- nvmf/common.sh@729 -- # python - 00:24:52.921 12:44:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.r4D5GcLCar 00:24:52.921 12:44:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.r4D5GcLCar 00:24:52.921 12:44:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.r4D5GcLCar 00:24:52.921 12:44:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:52.921 12:44:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:52.921 12:44:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:52.921 12:44:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:52.921 12:44:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:52.921 12:44:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:52.921 12:44:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MmeqLV7kew 00:24:52.921 12:44:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:52.921 12:44:57 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:52.921 12:44:57 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:24:52.921 12:44:57 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:24:52.921 12:44:57 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:24:52.921 12:44:57 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:24:52.921 12:44:57 keyring_file -- nvmf/common.sh@729 -- # python - 00:24:52.921 12:44:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MmeqLV7kew 00:24:52.921 12:44:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MmeqLV7kew 00:24:52.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
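Both key files above (key0 and key1) are produced the same way: a temp file from mktemp, an NVMe TLS interchange-format string written by the small program piped into python on stdin, then chmod 0600. The trace does not show the body of that python snippet, so the encoding below is an assumption based on the TP 8018 interchange layout (version prefix, two-hex-digit hash id, base64 of the PSK characters followed by their CRC-32); only the key material, paths and permissions are taken from this run:

psk=00112233445566778899aabbccddeeff       # key0 material from the trace
key0path=$(mktemp)
python3 - "$psk" 0 > "$key0path" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                      # the hex string itself is used as the PSK bytes
crc = zlib.crc32(key).to_bytes(4, "little")     # CRC-32 appended little-endian (assumed)
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$key0path"   # keyring_file rejects group/other-accessible key files, as the 0660 negative test below shows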
00:24:52.921 12:44:58 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.MmeqLV7kew 00:24:52.921 12:44:58 keyring_file -- keyring/file.sh@30 -- # tgtpid=100524 00:24:52.921 12:44:58 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:52.921 12:44:58 keyring_file -- keyring/file.sh@32 -- # waitforlisten 100524 00:24:52.921 12:44:58 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 100524 ']' 00:24:52.921 12:44:58 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.921 12:44:58 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:52.921 12:44:58 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.921 12:44:58 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:52.921 12:44:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:52.921 [2024-11-19 12:44:58.112538] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:24:52.921 [2024-11-19 12:44:58.112811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100524 ] 00:24:53.180 [2024-11-19 12:44:58.253408] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.180 [2024-11-19 12:44:58.299285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.180 [2024-11-19 12:44:58.346496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:24:53.440 12:44:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:53.440 [2024-11-19 12:44:58.493795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.440 null0 00:24:53.440 [2024-11-19 12:44:58.525749] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:53.440 [2024-11-19 12:44:58.526043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.440 12:44:58 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 
127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:53.440 [2024-11-19 12:44:58.553741] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:53.440 request: 00:24:53.440 { 00:24:53.440 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:53.440 "secure_channel": false, 00:24:53.440 "listen_address": { 00:24:53.440 "trtype": "tcp", 00:24:53.440 "traddr": "127.0.0.1", 00:24:53.440 "trsvcid": "4420" 00:24:53.440 }, 00:24:53.440 "method": "nvmf_subsystem_add_listener", 00:24:53.440 "req_id": 1 00:24:53.440 } 00:24:53.440 Got JSON-RPC error response 00:24:53.440 response: 00:24:53.440 { 00:24:53.440 "code": -32602, 00:24:53.440 "message": "Invalid parameters" 00:24:53.440 } 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:53.440 12:44:58 keyring_file -- keyring/file.sh@47 -- # bperfpid=100532 00:24:53.440 12:44:58 keyring_file -- keyring/file.sh@49 -- # waitforlisten 100532 /var/tmp/bperf.sock 00:24:53.440 12:44:58 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 100532 ']' 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:53.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:53.440 12:44:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:53.440 [2024-11-19 12:44:58.622103] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:24:53.440 [2024-11-19 12:44:58.622393] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100532 ] 00:24:53.699 [2024-11-19 12:44:58.763701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.699 [2024-11-19 12:44:58.804144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.699 [2024-11-19 12:44:58.836463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:53.699 12:44:58 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:53.699 12:44:58 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:24:53.699 12:44:58 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.r4D5GcLCar 00:24:53.699 12:44:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.r4D5GcLCar 00:24:54.265 12:44:59 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MmeqLV7kew 00:24:54.265 12:44:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MmeqLV7kew 00:24:54.265 12:44:59 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:54.265 12:44:59 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:24:54.265 12:44:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:54.265 12:44:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:54.265 12:44:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:54.523 12:44:59 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.r4D5GcLCar == \/\t\m\p\/\t\m\p\.\r\4\D\5\G\c\L\C\a\r ]] 00:24:54.523 12:44:59 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:24:54.781 12:44:59 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:24:54.781 12:44:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:54.781 12:44:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:54.781 12:44:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:55.039 12:45:00 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.MmeqLV7kew == \/\t\m\p\/\t\m\p\.\M\m\e\q\L\V\7\k\e\w ]] 00:24:55.039 12:45:00 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:24:55.039 12:45:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:55.039 12:45:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:55.039 12:45:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:55.039 12:45:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:55.039 12:45:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:55.297 12:45:00 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:55.297 12:45:00 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:24:55.297 12:45:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:55.297 12:45:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:55.297 12:45:00 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:55.297 12:45:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:55.297 12:45:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:55.555 12:45:00 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:24:55.555 12:45:00 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:55.555 12:45:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:55.555 [2024-11-19 12:45:00.774338] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:55.813 nvme0n1 00:24:55.813 12:45:00 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:24:55.813 12:45:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:55.813 12:45:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:55.813 12:45:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:55.813 12:45:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:55.813 12:45:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:56.071 12:45:01 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:24:56.071 12:45:01 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:24:56.071 12:45:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:56.071 12:45:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:56.071 12:45:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:56.071 12:45:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:56.071 12:45:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:56.329 12:45:01 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:24:56.329 12:45:01 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:56.329 Running I/O for 1 seconds... 
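Everything in this test is driven over bdevperf's private RPC socket (/var/tmp/bperf.sock) rather than the spdk_tgt socket. Stripped of the jq refcount and path checks, the sequence traced above reduces to the following, with the temp key paths and rpc.py/bdevperf.py locations exactly as in this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.r4D5GcLCar
$rpc -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MmeqLV7kew
# attach to the spdk_tgt listener on 127.0.0.1:4420, authenticating with key0 as the TLS PSK
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
# kick off the configured randrw workload; its IOPS/latency summary follows below
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests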
00:24:57.706 12766.00 IOPS, 49.87 MiB/s 00:24:57.706 Latency(us) 00:24:57.706 [2024-11-19T12:45:02.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.706 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:57.706 nvme0n1 : 1.01 12820.96 50.08 0.00 0.00 9958.58 3902.37 16562.73 00:24:57.706 [2024-11-19T12:45:02.966Z] =================================================================================================================== 00:24:57.706 [2024-11-19T12:45:02.966Z] Total : 12820.96 50.08 0.00 0.00 9958.58 3902.37 16562.73 00:24:57.706 { 00:24:57.706 "results": [ 00:24:57.706 { 00:24:57.706 "job": "nvme0n1", 00:24:57.706 "core_mask": "0x2", 00:24:57.706 "workload": "randrw", 00:24:57.706 "percentage": 50, 00:24:57.706 "status": "finished", 00:24:57.706 "queue_depth": 128, 00:24:57.706 "io_size": 4096, 00:24:57.706 "runtime": 1.005853, 00:24:57.706 "iops": 12820.95892739794, 00:24:57.706 "mibps": 50.0818708101482, 00:24:57.706 "io_failed": 0, 00:24:57.706 "io_timeout": 0, 00:24:57.706 "avg_latency_us": 9958.581330927138, 00:24:57.706 "min_latency_us": 3902.370909090909, 00:24:57.706 "max_latency_us": 16562.734545454547 00:24:57.706 } 00:24:57.706 ], 00:24:57.706 "core_count": 1 00:24:57.706 } 00:24:57.706 12:45:02 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:57.706 12:45:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:57.706 12:45:02 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:24:57.706 12:45:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:57.706 12:45:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:57.706 12:45:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:57.706 12:45:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:57.706 12:45:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:57.965 12:45:03 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:57.965 12:45:03 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:24:57.965 12:45:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:57.965 12:45:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:57.965 12:45:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:57.965 12:45:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:57.965 12:45:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:58.223 12:45:03 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:24:58.223 12:45:03 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:58.223 12:45:03 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:24:58.223 12:45:03 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:58.223 12:45:03 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:58.223 12:45:03 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:58.223 12:45:03 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:58.223 12:45:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:58.224 12:45:03 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:58.224 12:45:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:58.482 [2024-11-19 12:45:03.680298] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:58.482 [2024-11-19 12:45:03.680926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1165b20 (107): Transport endpoint is not connected 00:24:58.482 [2024-11-19 12:45:03.681916] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1165b20 (9): Bad file descriptor 00:24:58.482 [2024-11-19 12:45:03.682913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:58.482 [2024-11-19 12:45:03.682935] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:58.482 [2024-11-19 12:45:03.682946] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:24:58.482 [2024-11-19 12:45:03.682957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:58.482 request: 00:24:58.482 { 00:24:58.482 "name": "nvme0", 00:24:58.482 "trtype": "tcp", 00:24:58.482 "traddr": "127.0.0.1", 00:24:58.482 "adrfam": "ipv4", 00:24:58.482 "trsvcid": "4420", 00:24:58.482 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:58.482 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:58.482 "prchk_reftag": false, 00:24:58.482 "prchk_guard": false, 00:24:58.482 "hdgst": false, 00:24:58.482 "ddgst": false, 00:24:58.482 "psk": "key1", 00:24:58.482 "allow_unrecognized_csi": false, 00:24:58.482 "method": "bdev_nvme_attach_controller", 00:24:58.482 "req_id": 1 00:24:58.482 } 00:24:58.482 Got JSON-RPC error response 00:24:58.482 response: 00:24:58.482 { 00:24:58.482 "code": -5, 00:24:58.482 "message": "Input/output error" 00:24:58.482 } 00:24:58.482 12:45:03 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:24:58.482 12:45:03 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:58.482 12:45:03 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:58.482 12:45:03 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:58.482 12:45:03 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:24:58.482 12:45:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:58.482 12:45:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:58.482 12:45:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.482 12:45:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.482 12:45:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:58.741 12:45:03 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:58.741 12:45:03 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:24:58.741 12:45:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:58.741 12:45:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:58.741 12:45:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.741 12:45:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.741 12:45:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:58.999 12:45:04 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:24:58.999 12:45:04 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:24:58.999 12:45:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:59.257 12:45:04 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:24:59.257 12:45:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:59.515 12:45:04 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:24:59.516 12:45:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.516 12:45:04 keyring_file -- keyring/file.sh@78 -- # jq length 00:24:59.774 12:45:04 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:24:59.774 12:45:04 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.r4D5GcLCar 00:24:59.774 12:45:04 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.r4D5GcLCar 00:24:59.774 12:45:04 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:24:59.774 12:45:04 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.r4D5GcLCar 00:24:59.774 12:45:04 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:59.774 12:45:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.774 12:45:04 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:59.774 12:45:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.774 12:45:04 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.r4D5GcLCar 00:24:59.774 12:45:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.r4D5GcLCar 00:25:00.032 [2024-11-19 12:45:05.180301] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.r4D5GcLCar': 0100660 00:25:00.032 [2024-11-19 12:45:05.180333] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:00.032 request: 00:25:00.032 { 00:25:00.032 "name": "key0", 00:25:00.032 "path": "/tmp/tmp.r4D5GcLCar", 00:25:00.032 "method": "keyring_file_add_key", 00:25:00.032 "req_id": 1 00:25:00.032 } 00:25:00.032 Got JSON-RPC error response 00:25:00.032 response: 00:25:00.032 { 00:25:00.032 "code": -1, 00:25:00.032 "message": "Operation not permitted" 00:25:00.032 } 00:25:00.032 12:45:05 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:00.032 12:45:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:00.032 12:45:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:00.032 12:45:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:00.032 12:45:05 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.r4D5GcLCar 00:25:00.032 12:45:05 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.r4D5GcLCar 00:25:00.032 12:45:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.r4D5GcLCar 00:25:00.290 12:45:05 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.r4D5GcLCar 00:25:00.290 12:45:05 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:25:00.290 12:45:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:00.290 12:45:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:00.290 12:45:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:00.290 12:45:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:00.290 12:45:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:00.549 12:45:05 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:25:00.549 12:45:05 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:00.549 12:45:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:00.549 12:45:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:00.549 12:45:05 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:00.549 12:45:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.549 12:45:05 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:00.549 12:45:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.549 12:45:05 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:00.549 12:45:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:00.808 [2024-11-19 12:45:05.924457] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.r4D5GcLCar': No such file or directory 00:25:00.808 [2024-11-19 12:45:05.924490] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:00.808 [2024-11-19 12:45:05.924524] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:00.808 [2024-11-19 12:45:05.924532] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:25:00.808 [2024-11-19 12:45:05.924540] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:00.808 [2024-11-19 12:45:05.924547] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:00.808 request: 00:25:00.808 { 00:25:00.808 "name": "nvme0", 00:25:00.808 "trtype": "tcp", 00:25:00.808 "traddr": "127.0.0.1", 00:25:00.808 "adrfam": "ipv4", 00:25:00.808 "trsvcid": "4420", 00:25:00.808 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:00.808 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:00.808 "prchk_reftag": false, 00:25:00.808 "prchk_guard": false, 00:25:00.808 "hdgst": false, 00:25:00.808 "ddgst": false, 00:25:00.808 "psk": "key0", 00:25:00.808 "allow_unrecognized_csi": false, 00:25:00.808 "method": "bdev_nvme_attach_controller", 00:25:00.808 "req_id": 1 00:25:00.808 } 00:25:00.808 Got JSON-RPC error response 00:25:00.808 response: 00:25:00.808 { 00:25:00.808 "code": -19, 00:25:00.808 "message": "No such device" 00:25:00.808 } 00:25:00.808 12:45:05 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:00.808 12:45:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:00.808 12:45:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:00.808 12:45:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:00.808 12:45:05 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:25:00.808 12:45:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:01.066 12:45:06 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:01.066 12:45:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:01.066 12:45:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:01.066 12:45:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:01.067 
12:45:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:01.067 12:45:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:01.067 12:45:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LEj4r4MwOl 00:25:01.067 12:45:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:01.067 12:45:06 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:01.067 12:45:06 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:25:01.067 12:45:06 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:01.067 12:45:06 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:01.067 12:45:06 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:25:01.067 12:45:06 keyring_file -- nvmf/common.sh@729 -- # python - 00:25:01.067 12:45:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LEj4r4MwOl 00:25:01.067 12:45:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LEj4r4MwOl 00:25:01.067 12:45:06 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.LEj4r4MwOl 00:25:01.067 12:45:06 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LEj4r4MwOl 00:25:01.067 12:45:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LEj4r4MwOl 00:25:01.325 12:45:06 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:01.326 12:45:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:01.584 nvme0n1 00:25:01.584 12:45:06 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:25:01.584 12:45:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:01.584 12:45:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:01.584 12:45:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:01.584 12:45:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:01.584 12:45:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:01.842 12:45:07 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:25:01.842 12:45:07 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:25:01.842 12:45:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:02.101 12:45:07 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:25:02.101 12:45:07 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:25:02.101 12:45:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:02.101 12:45:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:02.101 12:45:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:02.358 12:45:07 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:25:02.358 12:45:07 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:25:02.358 12:45:07 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:25:02.358 12:45:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:02.358 12:45:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:02.358 12:45:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:02.358 12:45:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:02.616 12:45:07 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:25:02.616 12:45:07 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:02.616 12:45:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:02.874 12:45:08 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:25:02.874 12:45:08 keyring_file -- keyring/file.sh@105 -- # jq length 00:25:02.874 12:45:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.131 12:45:08 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:25:03.131 12:45:08 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LEj4r4MwOl 00:25:03.131 12:45:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LEj4r4MwOl 00:25:03.389 12:45:08 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MmeqLV7kew 00:25:03.389 12:45:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MmeqLV7kew 00:25:03.647 12:45:08 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:03.647 12:45:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:03.905 nvme0n1 00:25:03.905 12:45:09 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:25:03.905 12:45:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:04.471 12:45:09 keyring_file -- keyring/file.sh@113 -- # config='{ 00:25:04.471 "subsystems": [ 00:25:04.471 { 00:25:04.471 "subsystem": "keyring", 00:25:04.471 "config": [ 00:25:04.471 { 00:25:04.471 "method": "keyring_file_add_key", 00:25:04.471 "params": { 00:25:04.471 "name": "key0", 00:25:04.471 "path": "/tmp/tmp.LEj4r4MwOl" 00:25:04.471 } 00:25:04.471 }, 00:25:04.471 { 00:25:04.471 "method": "keyring_file_add_key", 00:25:04.471 "params": { 00:25:04.471 "name": "key1", 00:25:04.471 "path": "/tmp/tmp.MmeqLV7kew" 00:25:04.471 } 00:25:04.471 } 00:25:04.471 ] 00:25:04.471 }, 00:25:04.471 { 00:25:04.471 "subsystem": "iobuf", 00:25:04.471 "config": [ 00:25:04.471 { 00:25:04.471 "method": "iobuf_set_options", 00:25:04.471 "params": { 00:25:04.471 "small_pool_count": 8192, 00:25:04.471 "large_pool_count": 1024, 00:25:04.471 "small_bufsize": 8192, 00:25:04.471 "large_bufsize": 135168 00:25:04.471 } 00:25:04.471 } 00:25:04.471 ] 00:25:04.471 }, 00:25:04.471 { 00:25:04.471 "subsystem": "sock", 00:25:04.471 "config": [ 
00:25:04.471 { 00:25:04.471 "method": "sock_set_default_impl", 00:25:04.471 "params": { 00:25:04.471 "impl_name": "uring" 00:25:04.471 } 00:25:04.471 }, 00:25:04.471 { 00:25:04.471 "method": "sock_impl_set_options", 00:25:04.471 "params": { 00:25:04.471 "impl_name": "ssl", 00:25:04.471 "recv_buf_size": 4096, 00:25:04.471 "send_buf_size": 4096, 00:25:04.471 "enable_recv_pipe": true, 00:25:04.471 "enable_quickack": false, 00:25:04.471 "enable_placement_id": 0, 00:25:04.471 "enable_zerocopy_send_server": true, 00:25:04.471 "enable_zerocopy_send_client": false, 00:25:04.471 "zerocopy_threshold": 0, 00:25:04.471 "tls_version": 0, 00:25:04.471 "enable_ktls": false 00:25:04.471 } 00:25:04.471 }, 00:25:04.471 { 00:25:04.471 "method": "sock_impl_set_options", 00:25:04.471 "params": { 00:25:04.471 "impl_name": "posix", 00:25:04.471 "recv_buf_size": 2097152, 00:25:04.471 "send_buf_size": 2097152, 00:25:04.471 "enable_recv_pipe": true, 00:25:04.471 "enable_quickack": false, 00:25:04.471 "enable_placement_id": 0, 00:25:04.471 "enable_zerocopy_send_server": true, 00:25:04.471 "enable_zerocopy_send_client": false, 00:25:04.471 "zerocopy_threshold": 0, 00:25:04.471 "tls_version": 0, 00:25:04.472 "enable_ktls": false 00:25:04.472 } 00:25:04.472 }, 00:25:04.472 { 00:25:04.472 "method": "sock_impl_set_options", 00:25:04.472 "params": { 00:25:04.472 "impl_name": "uring", 00:25:04.472 "recv_buf_size": 2097152, 00:25:04.472 "send_buf_size": 2097152, 00:25:04.472 "enable_recv_pipe": true, 00:25:04.472 "enable_quickack": false, 00:25:04.472 "enable_placement_id": 0, 00:25:04.472 "enable_zerocopy_send_server": false, 00:25:04.472 "enable_zerocopy_send_client": false, 00:25:04.472 "zerocopy_threshold": 0, 00:25:04.472 "tls_version": 0, 00:25:04.472 "enable_ktls": false 00:25:04.472 } 00:25:04.472 } 00:25:04.472 ] 00:25:04.472 }, 00:25:04.472 { 00:25:04.472 "subsystem": "vmd", 00:25:04.472 "config": [] 00:25:04.472 }, 00:25:04.472 { 00:25:04.472 "subsystem": "accel", 00:25:04.472 "config": [ 00:25:04.472 { 00:25:04.472 "method": "accel_set_options", 00:25:04.472 "params": { 00:25:04.472 "small_cache_size": 128, 00:25:04.472 "large_cache_size": 16, 00:25:04.472 "task_count": 2048, 00:25:04.472 "sequence_count": 2048, 00:25:04.472 "buf_count": 2048 00:25:04.472 } 00:25:04.472 } 00:25:04.472 ] 00:25:04.472 }, 00:25:04.472 { 00:25:04.472 "subsystem": "bdev", 00:25:04.472 "config": [ 00:25:04.472 { 00:25:04.472 "method": "bdev_set_options", 00:25:04.472 "params": { 00:25:04.472 "bdev_io_pool_size": 65535, 00:25:04.472 "bdev_io_cache_size": 256, 00:25:04.472 "bdev_auto_examine": true, 00:25:04.472 "iobuf_small_cache_size": 128, 00:25:04.472 "iobuf_large_cache_size": 16 00:25:04.472 } 00:25:04.472 }, 00:25:04.472 { 00:25:04.472 "method": "bdev_raid_set_options", 00:25:04.472 "params": { 00:25:04.472 "process_window_size_kb": 1024, 00:25:04.472 "process_max_bandwidth_mb_sec": 0 00:25:04.472 } 00:25:04.472 }, 00:25:04.472 { 00:25:04.472 "method": "bdev_iscsi_set_options", 00:25:04.472 "params": { 00:25:04.472 "timeout_sec": 30 00:25:04.472 } 00:25:04.472 }, 00:25:04.472 { 00:25:04.472 "method": "bdev_nvme_set_options", 00:25:04.472 "params": { 00:25:04.472 "action_on_timeout": "none", 00:25:04.472 "timeout_us": 0, 00:25:04.472 "timeout_admin_us": 0, 00:25:04.472 "keep_alive_timeout_ms": 10000, 00:25:04.472 "arbitration_burst": 0, 00:25:04.472 "low_priority_weight": 0, 00:25:04.472 "medium_priority_weight": 0, 00:25:04.472 "high_priority_weight": 0, 00:25:04.472 "nvme_adminq_poll_period_us": 10000, 00:25:04.472 
"nvme_ioq_poll_period_us": 0, 00:25:04.472 "io_queue_requests": 512, 00:25:04.472 "delay_cmd_submit": true, 00:25:04.472 "transport_retry_count": 4, 00:25:04.472 "bdev_retry_count": 3, 00:25:04.472 "transport_ack_timeout": 0, 00:25:04.472 "ctrlr_loss_timeout_sec": 0, 00:25:04.472 "reconnect_delay_sec": 0, 00:25:04.472 "fast_io_fail_timeout_sec": 0, 00:25:04.472 "disable_auto_failback": false, 00:25:04.472 "generate_uuids": false, 00:25:04.472 "transport_tos": 0, 00:25:04.472 "nvme_error_stat": false, 00:25:04.472 "rdma_srq_size": 0, 00:25:04.472 "io_path_stat": false, 00:25:04.472 "allow_accel_sequence": false, 00:25:04.472 "rdma_max_cq_size": 0, 00:25:04.472 "rdma_cm_event_timeout_ms": 0, 00:25:04.472 "dhchap_digests": [ 00:25:04.472 "sha256", 00:25:04.472 "sha384", 00:25:04.472 "sha512" 00:25:04.472 ], 00:25:04.472 "dhchap_dhgroups": [ 00:25:04.472 "null", 00:25:04.472 "ffdhe2048", 00:25:04.472 "ffdhe3072", 00:25:04.472 "ffdhe4096", 00:25:04.472 "ffdhe6144", 00:25:04.472 "ffdhe8192" 00:25:04.472 ] 00:25:04.472 } 00:25:04.472 }, 00:25:04.472 { 00:25:04.472 "method": "bdev_nvme_attach_controller", 00:25:04.472 "params": { 00:25:04.472 "name": "nvme0", 00:25:04.472 "trtype": "TCP", 00:25:04.472 "adrfam": "IPv4", 00:25:04.472 "traddr": "127.0.0.1", 00:25:04.472 "trsvcid": "4420", 00:25:04.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:04.472 "prchk_reftag": false, 00:25:04.472 "prchk_guard": false, 00:25:04.472 "ctrlr_loss_timeout_sec": 0, 00:25:04.472 "reconnect_delay_sec": 0, 00:25:04.472 "fast_io_fail_timeout_sec": 0, 00:25:04.472 "psk": "key0", 00:25:04.472 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:04.472 "hdgst": false, 00:25:04.472 "ddgst": false 00:25:04.472 } 00:25:04.472 }, 00:25:04.472 { 00:25:04.472 "method": "bdev_nvme_set_hotplug", 00:25:04.472 "params": { 00:25:04.472 "period_us": 100000, 00:25:04.472 "enable": false 00:25:04.472 } 00:25:04.472 }, 00:25:04.472 { 00:25:04.472 "method": "bdev_wait_for_examine" 00:25:04.472 } 00:25:04.472 ] 00:25:04.472 }, 00:25:04.472 { 00:25:04.472 "subsystem": "nbd", 00:25:04.472 "config": [] 00:25:04.472 } 00:25:04.472 ] 00:25:04.472 }' 00:25:04.472 12:45:09 keyring_file -- keyring/file.sh@115 -- # killprocess 100532 00:25:04.472 12:45:09 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 100532 ']' 00:25:04.472 12:45:09 keyring_file -- common/autotest_common.sh@954 -- # kill -0 100532 00:25:04.472 12:45:09 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:04.472 12:45:09 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:04.472 12:45:09 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100532 00:25:04.472 killing process with pid 100532 00:25:04.472 Received shutdown signal, test time was about 1.000000 seconds 00:25:04.472 00:25:04.472 Latency(us) 00:25:04.472 [2024-11-19T12:45:09.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.472 [2024-11-19T12:45:09.732Z] =================================================================================================================== 00:25:04.472 [2024-11-19T12:45:09.732Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.472 12:45:09 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:04.472 12:45:09 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:04.472 12:45:09 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100532' 00:25:04.472 12:45:09 keyring_file -- common/autotest_common.sh@969 -- # 
kill 100532 00:25:04.472 12:45:09 keyring_file -- common/autotest_common.sh@974 -- # wait 100532 00:25:04.472 12:45:09 keyring_file -- keyring/file.sh@118 -- # bperfpid=100772 00:25:04.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:04.472 12:45:09 keyring_file -- keyring/file.sh@120 -- # waitforlisten 100772 /var/tmp/bperf.sock 00:25:04.472 12:45:09 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:04.472 12:45:09 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 100772 ']' 00:25:04.472 12:45:09 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:04.472 12:45:09 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:25:04.472 "subsystems": [ 00:25:04.472 { 00:25:04.472 "subsystem": "keyring", 00:25:04.472 "config": [ 00:25:04.472 { 00:25:04.472 "method": "keyring_file_add_key", 00:25:04.472 "params": { 00:25:04.472 "name": "key0", 00:25:04.472 "path": "/tmp/tmp.LEj4r4MwOl" 00:25:04.472 } 00:25:04.472 }, 00:25:04.472 { 00:25:04.472 "method": "keyring_file_add_key", 00:25:04.472 "params": { 00:25:04.472 "name": "key1", 00:25:04.472 "path": "/tmp/tmp.MmeqLV7kew" 00:25:04.472 } 00:25:04.472 } 00:25:04.472 ] 00:25:04.472 }, 00:25:04.472 { 00:25:04.472 "subsystem": "iobuf", 00:25:04.472 "config": [ 00:25:04.472 { 00:25:04.472 "method": "iobuf_set_options", 00:25:04.472 "params": { 00:25:04.472 "small_pool_count": 8192, 00:25:04.472 "large_pool_count": 1024, 00:25:04.472 "small_bufsize": 8192, 00:25:04.472 "large_bufsize": 135168 00:25:04.472 } 00:25:04.472 } 00:25:04.472 ] 00:25:04.472 }, 00:25:04.472 { 00:25:04.472 "subsystem": "sock", 00:25:04.472 "config": [ 00:25:04.472 { 00:25:04.472 "method": "sock_set_default_impl", 00:25:04.472 "params": { 00:25:04.472 "impl_name": "uring" 00:25:04.472 } 00:25:04.472 }, 00:25:04.472 { 00:25:04.472 "method": "sock_impl_set_options", 00:25:04.472 "params": { 00:25:04.472 "impl_name": "ssl", 00:25:04.472 "recv_buf_size": 4096, 00:25:04.472 "send_buf_size": 4096, 00:25:04.473 "enable_recv_pipe": true, 00:25:04.473 "enable_quickack": false, 00:25:04.473 "enable_placement_id": 0, 00:25:04.473 "enable_zerocopy_send_server": true, 00:25:04.473 "enable_zerocopy_send_client": false, 00:25:04.473 "zerocopy_threshold": 0, 00:25:04.473 "tls_version": 0, 00:25:04.473 "enable_ktls": false 00:25:04.473 } 00:25:04.473 }, 00:25:04.473 { 00:25:04.473 "method": "sock_impl_set_options", 00:25:04.473 "params": { 00:25:04.473 "impl_name": "posix", 00:25:04.473 "recv_buf_size": 2097152, 00:25:04.473 "send_buf_size": 2097152, 00:25:04.473 "enable_recv_pipe": true, 00:25:04.473 "enable_quickack": false, 00:25:04.473 "enable_placement_id": 0, 00:25:04.473 "enable_zerocopy_send_server": true, 00:25:04.473 "enable_zerocopy_send_client": false, 00:25:04.473 "zerocopy_threshold": 0, 00:25:04.473 "tls_version": 0, 00:25:04.473 "enable_ktls": false 00:25:04.473 } 00:25:04.473 }, 00:25:04.473 { 00:25:04.473 "method": "sock_impl_set_options", 00:25:04.473 "params": { 00:25:04.473 "impl_name": "uring", 00:25:04.473 "recv_buf_size": 2097152, 00:25:04.473 "send_buf_size": 2097152, 00:25:04.473 "enable_recv_pipe": true, 00:25:04.473 "enable_quickack": false, 00:25:04.473 "enable_placement_id": 0, 00:25:04.473 "enable_zerocopy_send_server": false, 00:25:04.473 "enable_zerocopy_send_client": false, 00:25:04.473 "zerocopy_threshold": 0, 00:25:04.473 "tls_version": 0, 
00:25:04.473 "enable_ktls": false 00:25:04.473 } 00:25:04.473 } 00:25:04.473 ] 00:25:04.473 }, 00:25:04.473 { 00:25:04.473 "subsystem": "vmd", 00:25:04.473 "config": [] 00:25:04.473 }, 00:25:04.473 { 00:25:04.473 "subsystem": "accel", 00:25:04.473 "config": [ 00:25:04.473 { 00:25:04.473 "method": "accel_set_options", 00:25:04.473 "params": { 00:25:04.473 "small_cache_size": 128, 00:25:04.473 "large_cache_size": 16, 00:25:04.473 "task_count": 2048, 00:25:04.473 "sequence_count": 2048, 00:25:04.473 "buf_count": 2048 00:25:04.473 } 00:25:04.473 } 00:25:04.473 ] 00:25:04.473 }, 00:25:04.473 { 00:25:04.473 "subsystem": "bdev", 00:25:04.473 "config": [ 00:25:04.473 { 00:25:04.473 "method": "bdev_set_options", 00:25:04.473 "params": { 00:25:04.473 "bdev_io_pool_size": 65535, 00:25:04.473 "bdev_io_cache_size": 256, 00:25:04.473 "bdev_auto_examine": true, 00:25:04.473 "iobuf_small_cache_size": 128, 00:25:04.473 "iobuf_large_cache_size": 16 00:25:04.473 } 00:25:04.473 }, 00:25:04.473 { 00:25:04.473 "method": "bdev_raid_set_options", 00:25:04.473 "params": { 00:25:04.473 "process_window_size_kb": 1024, 00:25:04.473 "process_max_bandwidth_mb_sec": 0 00:25:04.473 } 00:25:04.473 }, 00:25:04.473 { 00:25:04.473 "method": "bdev_iscsi_set_options", 00:25:04.473 "params": { 00:25:04.473 "timeout_sec": 30 00:25:04.473 } 00:25:04.473 }, 00:25:04.473 { 00:25:04.473 "method": "bdev_nvme_set_options", 00:25:04.473 "params": { 00:25:04.473 "action_on_timeout": "none", 00:25:04.473 "timeout_us": 0, 00:25:04.473 "timeout_admin_us": 0, 00:25:04.473 "keep_alive_timeout_ms": 10000, 00:25:04.473 "arbitration_burst": 0, 00:25:04.473 "low_priority_weight": 0, 00:25:04.473 "medium_priority_weight": 0, 00:25:04.473 "high_priority_weight": 0, 00:25:04.473 "nvme_adminq_poll_period_us": 10000, 00:25:04.473 "nvme_ioq_poll_period_us": 0, 00:25:04.473 "io_queue_requests": 512, 00:25:04.473 "delay_cmd_submit": true, 00:25:04.473 "transport_retry_count": 4, 00:25:04.473 "bdev_retry_count": 3, 00:25:04.473 "transport_ack_timeout": 0, 00:25:04.473 "ctrlr_loss_timeout_sec": 0, 00:25:04.473 "reconnect_delay_sec": 0, 00:25:04.473 "fast_io_fail_timeout_sec": 0, 00:25:04.473 "disable_auto_failback": false, 00:25:04.473 "generate_uuids": false, 00:25:04.473 "transport_tos": 0, 00:25:04.473 "nvme_error_stat": false, 00:25:04.473 "rdma_srq_size": 0, 00:25:04.473 "io_path_stat": false, 00:25:04.473 "allow_accel_sequence": false, 00:25:04.473 "rdma_max_cq_size": 0, 00:25:04.473 "rdma_cm_event_timeout_ms": 0, 00:25:04.473 "dhchap_digests": [ 00:25:04.473 "sha256", 00:25:04.473 "sha384", 00:25:04.473 "sha512" 00:25:04.473 ], 00:25:04.473 "dhchap_dhgroups": [ 00:25:04.473 "null", 00:25:04.473 "ffdhe2048", 00:25:04.473 "ffdhe3072", 00:25:04.473 "ffdhe4096", 00:25:04.473 "ffdhe6144", 00:25:04.473 "ffdhe8192" 00:25:04.473 ] 00:25:04.473 } 00:25:04.473 }, 00:25:04.473 { 00:25:04.473 "method": "bdev_nvme_attach_controller", 00:25:04.473 "params": { 00:25:04.473 "name": "nvme0", 00:25:04.473 "trtype": "TCP", 00:25:04.473 "adrfam": "IPv4", 00:25:04.473 "traddr": "127.0.0.1", 00:25:04.473 "trsvcid": "4420", 00:25:04.473 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:04.473 "prchk_reftag": false, 00:25:04.473 "prchk_guard": false, 00:25:04.473 "ctrlr_loss_timeout_sec": 0, 00:25:04.473 "reconnect_delay_sec": 0, 00:25:04.473 "fast_io_fail_timeout_sec": 0, 00:25:04.473 "psk": "key0", 00:25:04.473 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:04.473 "hdgst": false, 00:25:04.473 "ddgst": false 00:25:04.473 } 00:25:04.473 }, 00:25:04.473 { 00:25:04.473 "method": 
"bdev_nvme_set_hotplug", 00:25:04.473 "params": { 00:25:04.473 "period_us": 100000, 00:25:04.473 "enable": false 00:25:04.473 } 00:25:04.473 }, 00:25:04.473 { 00:25:04.473 "method": "bdev_wait_for_examine" 00:25:04.473 } 00:25:04.473 ] 00:25:04.473 }, 00:25:04.473 { 00:25:04.473 "subsystem": "nbd", 00:25:04.473 "config": [] 00:25:04.473 } 00:25:04.473 ] 00:25:04.473 }' 00:25:04.473 12:45:09 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:04.473 12:45:09 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:04.473 12:45:09 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:04.473 12:45:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:04.473 [2024-11-19 12:45:09.680172] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:25:04.473 [2024-11-19 12:45:09.680455] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100772 ] 00:25:04.732 [2024-11-19 12:45:09.814777] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.732 [2024-11-19 12:45:09.846710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.732 [2024-11-19 12:45:09.953358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:04.732 [2024-11-19 12:45:09.988556] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:05.667 12:45:10 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:05.667 12:45:10 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:05.667 12:45:10 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:25:05.667 12:45:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:05.667 12:45:10 keyring_file -- keyring/file.sh@121 -- # jq length 00:25:05.667 12:45:10 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:05.667 12:45:10 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:25:05.667 12:45:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:05.667 12:45:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:05.667 12:45:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:05.667 12:45:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:05.667 12:45:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:05.926 12:45:11 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:25:05.926 12:45:11 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:25:05.926 12:45:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:05.926 12:45:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:05.926 12:45:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:05.926 12:45:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:05.926 12:45:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.185 12:45:11 keyring_file -- keyring/file.sh@123 -- # 
(( 1 == 1 )) 00:25:06.185 12:45:11 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:25:06.185 12:45:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:06.185 12:45:11 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:25:06.444 12:45:11 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:25:06.444 12:45:11 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:06.444 12:45:11 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.LEj4r4MwOl /tmp/tmp.MmeqLV7kew 00:25:06.444 12:45:11 keyring_file -- keyring/file.sh@20 -- # killprocess 100772 00:25:06.444 12:45:11 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 100772 ']' 00:25:06.444 12:45:11 keyring_file -- common/autotest_common.sh@954 -- # kill -0 100772 00:25:06.444 12:45:11 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:06.444 12:45:11 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:06.444 12:45:11 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100772 00:25:06.444 killing process with pid 100772 00:25:06.444 Received shutdown signal, test time was about 1.000000 seconds 00:25:06.444 00:25:06.444 Latency(us) 00:25:06.444 [2024-11-19T12:45:11.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.444 [2024-11-19T12:45:11.704Z] =================================================================================================================== 00:25:06.444 [2024-11-19T12:45:11.704Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:06.444 12:45:11 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:06.444 12:45:11 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:06.444 12:45:11 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100772' 00:25:06.444 12:45:11 keyring_file -- common/autotest_common.sh@969 -- # kill 100772 00:25:06.444 12:45:11 keyring_file -- common/autotest_common.sh@974 -- # wait 100772 00:25:06.702 12:45:11 keyring_file -- keyring/file.sh@21 -- # killprocess 100524 00:25:06.702 12:45:11 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 100524 ']' 00:25:06.702 12:45:11 keyring_file -- common/autotest_common.sh@954 -- # kill -0 100524 00:25:06.702 12:45:11 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:06.702 12:45:11 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:06.702 12:45:11 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100524 00:25:06.702 killing process with pid 100524 00:25:06.702 12:45:11 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:06.702 12:45:11 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:06.702 12:45:11 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100524' 00:25:06.702 12:45:11 keyring_file -- common/autotest_common.sh@969 -- # kill 100524 00:25:06.702 12:45:11 keyring_file -- common/autotest_common.sh@974 -- # wait 100524 00:25:06.961 ************************************ 00:25:06.961 END TEST keyring_file 00:25:06.961 ************************************ 00:25:06.961 00:25:06.961 real 0m14.319s 00:25:06.961 user 0m37.040s 00:25:06.961 sys 0m2.567s 00:25:06.961 12:45:12 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:06.961 12:45:12 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:06.961 12:45:12 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:25:06.961 12:45:12 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:06.961 12:45:12 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:06.961 12:45:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:06.961 12:45:12 -- common/autotest_common.sh@10 -- # set +x 00:25:06.961 ************************************ 00:25:06.961 START TEST keyring_linux 00:25:06.961 ************************************ 00:25:06.961 12:45:12 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:06.961 Joined session keyring: 32223435 00:25:06.961 * Looking for test storage... 00:25:06.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:06.961 12:45:12 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:06.961 12:45:12 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:25:06.961 12:45:12 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:07.220 12:45:12 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:07.220 12:45:12 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.220 12:45:12 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.220 12:45:12 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.220 12:45:12 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.220 12:45:12 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.220 12:45:12 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.220 12:45:12 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.220 12:45:12 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.220 12:45:12 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.220 12:45:12 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.220 12:45:12 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.220 12:45:12 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@345 -- # : 1 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@368 -- # return 0 00:25:07.221 12:45:12 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.221 12:45:12 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:07.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.221 --rc genhtml_branch_coverage=1 00:25:07.221 --rc genhtml_function_coverage=1 00:25:07.221 --rc genhtml_legend=1 00:25:07.221 --rc geninfo_all_blocks=1 00:25:07.221 --rc geninfo_unexecuted_blocks=1 00:25:07.221 00:25:07.221 ' 00:25:07.221 12:45:12 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:07.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.221 --rc genhtml_branch_coverage=1 00:25:07.221 --rc genhtml_function_coverage=1 00:25:07.221 --rc genhtml_legend=1 00:25:07.221 --rc geninfo_all_blocks=1 00:25:07.221 --rc geninfo_unexecuted_blocks=1 00:25:07.221 00:25:07.221 ' 00:25:07.221 12:45:12 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:07.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.221 --rc genhtml_branch_coverage=1 00:25:07.221 --rc genhtml_function_coverage=1 00:25:07.221 --rc genhtml_legend=1 00:25:07.221 --rc geninfo_all_blocks=1 00:25:07.221 --rc geninfo_unexecuted_blocks=1 00:25:07.221 00:25:07.221 ' 00:25:07.221 12:45:12 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:07.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.221 --rc genhtml_branch_coverage=1 00:25:07.221 --rc genhtml_function_coverage=1 00:25:07.221 --rc genhtml_legend=1 00:25:07.221 --rc geninfo_all_blocks=1 00:25:07.221 --rc geninfo_unexecuted_blocks=1 00:25:07.221 00:25:07.221 ' 00:25:07.221 12:45:12 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.221 12:45:12 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=bae1b18f-cc14-461e-aa63-e888be1a2cc9 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.221 12:45:12 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.221 12:45:12 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.221 12:45:12 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.221 12:45:12 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.221 12:45:12 keyring_linux -- paths/export.sh@5 -- # export PATH 00:25:07.221 12:45:12 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:07.221 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:07.221 12:45:12 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:07.221 12:45:12 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:07.221 12:45:12 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:25:07.221 12:45:12 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:25:07.221 12:45:12 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:25:07.221 12:45:12 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@729 -- # python - 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:25:07.221 /tmp/:spdk-test:key0 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:25:07.221 12:45:12 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:25:07.221 12:45:12 keyring_linux -- nvmf/common.sh@729 -- # python - 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:25:07.221 /tmp/:spdk-test:key1 00:25:07.221 12:45:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:25:07.221 12:45:12 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100897 00:25:07.221 12:45:12 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100897 00:25:07.221 12:45:12 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 100897 ']' 00:25:07.222 12:45:12 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.222 12:45:12 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:07.222 12:45:12 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:07.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.222 12:45:12 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.222 12:45:12 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:07.222 12:45:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:07.222 [2024-11-19 12:45:12.435174] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:25:07.222 [2024-11-19 12:45:12.435281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100897 ] 00:25:07.494 [2024-11-19 12:45:12.572936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.494 [2024-11-19 12:45:12.608027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.494 [2024-11-19 12:45:12.643203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:07.756 12:45:12 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:07.756 12:45:12 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:25:07.756 12:45:12 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:25:07.756 12:45:12 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.756 12:45:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:07.756 [2024-11-19 12:45:12.755837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.756 null0 00:25:07.756 [2024-11-19 12:45:12.787759] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:07.756 [2024-11-19 12:45:12.787948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:07.756 12:45:12 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.756 12:45:12 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:25:07.756 384137051 00:25:07.756 12:45:12 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:25:07.756 377413947 00:25:07.756 12:45:12 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100906 00:25:07.756 12:45:12 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100906 /var/tmp/bperf.sock 00:25:07.756 12:45:12 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 100906 ']' 00:25:07.756 12:45:12 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:07.756 12:45:12 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:07.756 12:45:12 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:07.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:07.756 12:45:12 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:07.756 12:45:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:07.756 12:45:12 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:25:07.756 [2024-11-19 12:45:12.874598] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:25:07.756 [2024-11-19 12:45:12.874705] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100906 ] 00:25:08.014 [2024-11-19 12:45:13.015932] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.014 [2024-11-19 12:45:13.056715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.014 12:45:13 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:08.014 12:45:13 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:25:08.014 12:45:13 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:25:08.014 12:45:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:25:08.272 12:45:13 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:25:08.272 12:45:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:08.531 [2024-11-19 12:45:13.538963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:08.531 12:45:13 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:08.531 12:45:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:08.531 [2024-11-19 12:45:13.773265] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:08.789 nvme0n1 00:25:08.789 12:45:13 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:25:08.789 12:45:13 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:25:08.789 12:45:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:08.789 12:45:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:08.789 12:45:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:08.789 12:45:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:09.048 12:45:14 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:09.048 12:45:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:09.048 12:45:14 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:09.048 12:45:14 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:09.048 12:45:14 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:09.048 12:45:14 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:09.048 12:45:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:09.307 12:45:14 keyring_linux -- keyring/linux.sh@25 -- # sn=384137051 00:25:09.307 12:45:14 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:09.307 12:45:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:25:09.307 12:45:14 keyring_linux -- keyring/linux.sh@26 -- # [[ 384137051 == \3\8\4\1\3\7\0\5\1 ]] 00:25:09.307 12:45:14 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 384137051 00:25:09.307 12:45:14 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:09.307 12:45:14 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:09.307 Running I/O for 1 seconds... 00:25:10.317 13188.00 IOPS, 51.52 MiB/s 00:25:10.317 Latency(us) 00:25:10.317 [2024-11-19T12:45:15.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.317 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:10.317 nvme0n1 : 1.01 13195.85 51.55 0.00 0.00 9650.63 7685.59 17277.67 00:25:10.317 [2024-11-19T12:45:15.577Z] =================================================================================================================== 00:25:10.317 [2024-11-19T12:45:15.577Z] Total : 13195.85 51.55 0.00 0.00 9650.63 7685.59 17277.67 00:25:10.317 { 00:25:10.317 "results": [ 00:25:10.317 { 00:25:10.317 "job": "nvme0n1", 00:25:10.317 "core_mask": "0x2", 00:25:10.317 "workload": "randread", 00:25:10.317 "status": "finished", 00:25:10.317 "queue_depth": 128, 00:25:10.317 "io_size": 4096, 00:25:10.317 "runtime": 1.009257, 00:25:10.317 "iops": 13195.846053086578, 00:25:10.317 "mibps": 51.546273644869444, 00:25:10.317 "io_failed": 0, 00:25:10.317 "io_timeout": 0, 00:25:10.317 "avg_latency_us": 9650.625363895753, 00:25:10.317 "min_latency_us": 7685.585454545455, 00:25:10.317 "max_latency_us": 17277.672727272726 00:25:10.317 } 00:25:10.317 ], 00:25:10.317 "core_count": 1 00:25:10.317 } 00:25:10.317 12:45:15 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:10.317 12:45:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:10.885 12:45:15 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:10.885 12:45:15 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:10.885 12:45:15 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:10.885 12:45:15 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:10.885 12:45:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:10.885 12:45:15 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:11.144 12:45:16 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:25:11.144 12:45:16 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:11.144 
12:45:16 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:11.144 12:45:16 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.144 12:45:16 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:11.144 12:45:16 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.144 12:45:16 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:11.144 12:45:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:11.144 [2024-11-19 12:45:16.360687] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:11.144 [2024-11-19 12:45:16.361392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17137a0 (107): Transport endpoint is not connected 00:25:11.144 [2024-11-19 12:45:16.362382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17137a0 (9): Bad file descriptor 00:25:11.144 [2024-11-19 12:45:16.363379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:11.144 [2024-11-19 12:45:16.363439] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:11.144 [2024-11-19 12:45:16.363450] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:11.144 [2024-11-19 12:45:16.363462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:25:11.144 request: 00:25:11.144 { 00:25:11.144 "name": "nvme0", 00:25:11.144 "trtype": "tcp", 00:25:11.144 "traddr": "127.0.0.1", 00:25:11.144 "adrfam": "ipv4", 00:25:11.144 "trsvcid": "4420", 00:25:11.144 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:11.144 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:11.144 "prchk_reftag": false, 00:25:11.144 "prchk_guard": false, 00:25:11.144 "hdgst": false, 00:25:11.144 "ddgst": false, 00:25:11.144 "psk": ":spdk-test:key1", 00:25:11.144 "allow_unrecognized_csi": false, 00:25:11.144 "method": "bdev_nvme_attach_controller", 00:25:11.144 "req_id": 1 00:25:11.144 } 00:25:11.144 Got JSON-RPC error response 00:25:11.144 response: 00:25:11.144 { 00:25:11.144 "code": -5, 00:25:11.144 "message": "Input/output error" 00:25:11.144 } 00:25:11.144 12:45:16 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:25:11.144 12:45:16 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:11.144 12:45:16 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:11.144 12:45:16 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@33 -- # sn=384137051 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 384137051 00:25:11.144 1 links removed 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@33 -- # sn=377413947 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 377413947 00:25:11.144 1 links removed 00:25:11.144 12:45:16 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100906 00:25:11.144 12:45:16 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 100906 ']' 00:25:11.144 12:45:16 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 100906 00:25:11.144 12:45:16 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:25:11.144 12:45:16 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:11.403 12:45:16 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100906 00:25:11.403 12:45:16 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:11.403 12:45:16 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:11.403 killing process with pid 100906 00:25:11.403 12:45:16 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100906' 00:25:11.403 Received shutdown signal, test time was about 1.000000 seconds 00:25:11.403 00:25:11.403 Latency(us) 00:25:11.403 [2024-11-19T12:45:16.663Z] Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:25:11.403 [2024-11-19T12:45:16.663Z] =================================================================================================================== 00:25:11.403 [2024-11-19T12:45:16.663Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:11.403 12:45:16 keyring_linux -- common/autotest_common.sh@969 -- # kill 100906 00:25:11.403 12:45:16 keyring_linux -- common/autotest_common.sh@974 -- # wait 100906 00:25:11.403 12:45:16 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100897 00:25:11.403 12:45:16 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 100897 ']' 00:25:11.403 12:45:16 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 100897 00:25:11.403 12:45:16 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:25:11.403 12:45:16 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:11.403 12:45:16 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100897 00:25:11.403 12:45:16 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:11.403 12:45:16 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:11.403 killing process with pid 100897 00:25:11.403 12:45:16 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100897' 00:25:11.403 12:45:16 keyring_linux -- common/autotest_common.sh@969 -- # kill 100897 00:25:11.403 12:45:16 keyring_linux -- common/autotest_common.sh@974 -- # wait 100897 00:25:11.662 00:25:11.662 real 0m4.755s 00:25:11.662 user 0m9.659s 00:25:11.662 sys 0m1.342s 00:25:11.662 12:45:16 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:11.662 12:45:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:11.662 ************************************ 00:25:11.662 END TEST keyring_linux 00:25:11.662 ************************************ 00:25:11.662 12:45:16 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:25:11.662 12:45:16 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:11.662 12:45:16 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:11.662 12:45:16 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:25:11.662 12:45:16 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:25:11.662 12:45:16 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:25:11.662 12:45:16 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:11.662 12:45:16 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:11.662 12:45:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:11.662 12:45:16 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:25:11.662 12:45:16 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:11.662 12:45:16 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:25:11.662 12:45:16 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:11.662 12:45:16 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:11.662 12:45:16 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:25:11.662 12:45:16 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:25:11.662 12:45:16 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:25:11.662 12:45:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:11.662 12:45:16 -- common/autotest_common.sh@10 -- # set +x 00:25:11.662 12:45:16 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:25:11.663 12:45:16 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:25:11.663 12:45:16 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:25:11.663 12:45:16 -- common/autotest_common.sh@10 -- # set +x 00:25:13.567 INFO: APP EXITING 00:25:13.567 INFO: killing 
all VMs 00:25:13.567 INFO: killing vhost app 00:25:13.567 INFO: EXIT DONE 00:25:14.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:14.401 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:14.401 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:14.969 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:14.969 Cleaning 00:25:14.969 Removing: /var/run/dpdk/spdk0/config 00:25:14.969 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:14.969 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:14.969 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:14.969 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:14.969 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:14.969 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:14.969 Removing: /var/run/dpdk/spdk1/config 00:25:14.969 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:14.969 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:14.969 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:14.969 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:14.969 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:14.969 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:14.969 Removing: /var/run/dpdk/spdk2/config 00:25:14.969 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:14.969 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:14.969 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:14.969 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:14.969 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:14.969 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:14.969 Removing: /var/run/dpdk/spdk3/config 00:25:14.969 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:14.969 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:14.969 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:14.969 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:14.969 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:15.228 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:15.228 Removing: /var/run/dpdk/spdk4/config 00:25:15.228 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:15.228 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:15.228 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:15.228 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:15.228 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:15.228 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:15.228 Removing: /dev/shm/nvmf_trace.0 00:25:15.228 Removing: /dev/shm/spdk_tgt_trace.pid69940 00:25:15.228 Removing: /var/run/dpdk/spdk0 00:25:15.228 Removing: /var/run/dpdk/spdk1 00:25:15.228 Removing: /var/run/dpdk/spdk2 00:25:15.228 Removing: /var/run/dpdk/spdk3 00:25:15.228 Removing: /var/run/dpdk/spdk4 00:25:15.228 Removing: /var/run/dpdk/spdk_pid100023 00:25:15.228 Removing: /var/run/dpdk/spdk_pid100054 00:25:15.228 Removing: /var/run/dpdk/spdk_pid100524 00:25:15.228 Removing: /var/run/dpdk/spdk_pid100532 00:25:15.228 Removing: /var/run/dpdk/spdk_pid100772 00:25:15.228 Removing: /var/run/dpdk/spdk_pid100897 00:25:15.228 Removing: /var/run/dpdk/spdk_pid100906 00:25:15.228 Removing: /var/run/dpdk/spdk_pid69792 00:25:15.228 Removing: /var/run/dpdk/spdk_pid69940 00:25:15.228 Removing: /var/run/dpdk/spdk_pid70133 00:25:15.228 Removing: 
/var/run/dpdk/spdk_pid70219 00:25:15.228 Removing: /var/run/dpdk/spdk_pid70234 00:25:15.228 Removing: /var/run/dpdk/spdk_pid70343 00:25:15.228 Removing: /var/run/dpdk/spdk_pid70354 00:25:15.228 Removing: /var/run/dpdk/spdk_pid70488 00:25:15.228 Removing: /var/run/dpdk/spdk_pid70683 00:25:15.228 Removing: /var/run/dpdk/spdk_pid70832 00:25:15.228 Removing: /var/run/dpdk/spdk_pid70910 00:25:15.228 Removing: /var/run/dpdk/spdk_pid70981 00:25:15.228 Removing: /var/run/dpdk/spdk_pid71067 00:25:15.228 Removing: /var/run/dpdk/spdk_pid71139 00:25:15.228 Removing: /var/run/dpdk/spdk_pid71177 00:25:15.228 Removing: /var/run/dpdk/spdk_pid71213 00:25:15.228 Removing: /var/run/dpdk/spdk_pid71277 00:25:15.228 Removing: /var/run/dpdk/spdk_pid71374 00:25:15.228 Removing: /var/run/dpdk/spdk_pid71820 00:25:15.228 Removing: /var/run/dpdk/spdk_pid71869 00:25:15.228 Removing: /var/run/dpdk/spdk_pid71911 00:25:15.228 Removing: /var/run/dpdk/spdk_pid71914 00:25:15.228 Removing: /var/run/dpdk/spdk_pid71981 00:25:15.228 Removing: /var/run/dpdk/spdk_pid71997 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72059 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72067 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72113 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72123 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72163 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72168 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72299 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72334 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72417 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72738 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72755 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72786 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72800 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72815 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72834 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72848 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72862 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72877 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72896 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72906 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72925 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72938 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72954 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72973 00:25:15.228 Removing: /var/run/dpdk/spdk_pid72981 00:25:15.228 Removing: /var/run/dpdk/spdk_pid73002 00:25:15.228 Removing: /var/run/dpdk/spdk_pid73021 00:25:15.228 Removing: /var/run/dpdk/spdk_pid73029 00:25:15.228 Removing: /var/run/dpdk/spdk_pid73050 00:25:15.228 Removing: /var/run/dpdk/spdk_pid73075 00:25:15.228 Removing: /var/run/dpdk/spdk_pid73094 00:25:15.228 Removing: /var/run/dpdk/spdk_pid73118 00:25:15.228 Removing: /var/run/dpdk/spdk_pid73190 00:25:15.228 Removing: /var/run/dpdk/spdk_pid73213 00:25:15.228 Removing: /var/run/dpdk/spdk_pid73228 00:25:15.228 Removing: /var/run/dpdk/spdk_pid73251 00:25:15.228 Removing: /var/run/dpdk/spdk_pid73266 00:25:15.228 Removing: /var/run/dpdk/spdk_pid73268 00:25:15.228 Removing: /var/run/dpdk/spdk_pid73305 00:25:15.228 Removing: /var/run/dpdk/spdk_pid73324 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73347 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73362 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73366 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73370 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73385 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73389 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73393 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73408 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73431 
00:25:15.487 Removing: /var/run/dpdk/spdk_pid73463 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73467 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73496 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73505 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73507 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73553 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73559 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73591 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73593 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73601 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73608 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73610 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73623 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73625 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73633 00:25:15.487 Removing: /var/run/dpdk/spdk_pid73709 00:25:15.488 Removing: /var/run/dpdk/spdk_pid73751 00:25:15.488 Removing: /var/run/dpdk/spdk_pid73863 00:25:15.488 Removing: /var/run/dpdk/spdk_pid73897 00:25:15.488 Removing: /var/run/dpdk/spdk_pid73937 00:25:15.488 Removing: /var/run/dpdk/spdk_pid73957 00:25:15.488 Removing: /var/run/dpdk/spdk_pid73973 00:25:15.488 Removing: /var/run/dpdk/spdk_pid73988 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74019 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74035 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74113 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74129 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74167 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74224 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74280 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74302 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74404 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74441 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74479 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74700 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74792 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74815 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74850 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74878 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74917 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74945 00:25:15.488 Removing: /var/run/dpdk/spdk_pid74982 00:25:15.488 Removing: /var/run/dpdk/spdk_pid75369 00:25:15.488 Removing: /var/run/dpdk/spdk_pid75409 00:25:15.488 Removing: /var/run/dpdk/spdk_pid75752 00:25:15.488 Removing: /var/run/dpdk/spdk_pid76212 00:25:15.488 Removing: /var/run/dpdk/spdk_pid76482 00:25:15.488 Removing: /var/run/dpdk/spdk_pid77333 00:25:15.488 Removing: /var/run/dpdk/spdk_pid78239 00:25:15.488 Removing: /var/run/dpdk/spdk_pid78357 00:25:15.488 Removing: /var/run/dpdk/spdk_pid78425 00:25:15.488 Removing: /var/run/dpdk/spdk_pid79826 00:25:15.488 Removing: /var/run/dpdk/spdk_pid80127 00:25:15.488 Removing: /var/run/dpdk/spdk_pid83846 00:25:15.488 Removing: /var/run/dpdk/spdk_pid84217 00:25:15.488 Removing: /var/run/dpdk/spdk_pid84326 00:25:15.488 Removing: /var/run/dpdk/spdk_pid84453 00:25:15.488 Removing: /var/run/dpdk/spdk_pid84474 00:25:15.488 Removing: /var/run/dpdk/spdk_pid84495 00:25:15.488 Removing: /var/run/dpdk/spdk_pid84516 00:25:15.488 Removing: /var/run/dpdk/spdk_pid84605 00:25:15.488 Removing: /var/run/dpdk/spdk_pid84729 00:25:15.488 Removing: /var/run/dpdk/spdk_pid84876 00:25:15.488 Removing: /var/run/dpdk/spdk_pid84946 00:25:15.488 Removing: /var/run/dpdk/spdk_pid85133 00:25:15.488 Removing: /var/run/dpdk/spdk_pid85201 00:25:15.488 Removing: /var/run/dpdk/spdk_pid85281 00:25:15.488 Removing: /var/run/dpdk/spdk_pid85629 00:25:15.488 Removing: /var/run/dpdk/spdk_pid86027 00:25:15.488 Removing: 
/var/run/dpdk/spdk_pid86028 00:25:15.488 Removing: /var/run/dpdk/spdk_pid86029 00:25:15.488 Removing: /var/run/dpdk/spdk_pid86289 00:25:15.488 Removing: /var/run/dpdk/spdk_pid86532 00:25:15.488 Removing: /var/run/dpdk/spdk_pid86539 00:25:15.488 Removing: /var/run/dpdk/spdk_pid88907 00:25:15.488 Removing: /var/run/dpdk/spdk_pid88909 00:25:15.488 Removing: /var/run/dpdk/spdk_pid89229 00:25:15.488 Removing: /var/run/dpdk/spdk_pid89249 00:25:15.747 Removing: /var/run/dpdk/spdk_pid89263 00:25:15.747 Removing: /var/run/dpdk/spdk_pid89288 00:25:15.747 Removing: /var/run/dpdk/spdk_pid89299 00:25:15.747 Removing: /var/run/dpdk/spdk_pid89393 00:25:15.747 Removing: /var/run/dpdk/spdk_pid89405 00:25:15.747 Removing: /var/run/dpdk/spdk_pid89509 00:25:15.747 Removing: /var/run/dpdk/spdk_pid89515 00:25:15.747 Removing: /var/run/dpdk/spdk_pid89628 00:25:15.747 Removing: /var/run/dpdk/spdk_pid89630 00:25:15.747 Removing: /var/run/dpdk/spdk_pid90078 00:25:15.747 Removing: /var/run/dpdk/spdk_pid90121 00:25:15.747 Removing: /var/run/dpdk/spdk_pid90230 00:25:15.747 Removing: /var/run/dpdk/spdk_pid90309 00:25:15.747 Removing: /var/run/dpdk/spdk_pid90669 00:25:15.747 Removing: /var/run/dpdk/spdk_pid90858 00:25:15.747 Removing: /var/run/dpdk/spdk_pid91278 00:25:15.747 Removing: /var/run/dpdk/spdk_pid91831 00:25:15.747 Removing: /var/run/dpdk/spdk_pid92691 00:25:15.747 Removing: /var/run/dpdk/spdk_pid93320 00:25:15.747 Removing: /var/run/dpdk/spdk_pid93328 00:25:15.747 Removing: /var/run/dpdk/spdk_pid95346 00:25:15.747 Removing: /var/run/dpdk/spdk_pid95393 00:25:15.747 Removing: /var/run/dpdk/spdk_pid95440 00:25:15.747 Removing: /var/run/dpdk/spdk_pid95494 00:25:15.747 Removing: /var/run/dpdk/spdk_pid95603 00:25:15.747 Removing: /var/run/dpdk/spdk_pid95650 00:25:15.747 Removing: /var/run/dpdk/spdk_pid95705 00:25:15.747 Removing: /var/run/dpdk/spdk_pid95765 00:25:15.747 Removing: /var/run/dpdk/spdk_pid96116 00:25:15.747 Removing: /var/run/dpdk/spdk_pid97326 00:25:15.747 Removing: /var/run/dpdk/spdk_pid97467 00:25:15.747 Removing: /var/run/dpdk/spdk_pid97702 00:25:15.747 Removing: /var/run/dpdk/spdk_pid98290 00:25:15.747 Removing: /var/run/dpdk/spdk_pid98444 00:25:15.747 Removing: /var/run/dpdk/spdk_pid98601 00:25:15.747 Removing: /var/run/dpdk/spdk_pid98698 00:25:15.747 Removing: /var/run/dpdk/spdk_pid98861 00:25:15.747 Removing: /var/run/dpdk/spdk_pid98970 00:25:15.747 Removing: /var/run/dpdk/spdk_pid99668 00:25:15.747 Removing: /var/run/dpdk/spdk_pid99702 00:25:15.747 Removing: /var/run/dpdk/spdk_pid99733 00:25:15.747 Removing: /var/run/dpdk/spdk_pid99989 00:25:15.747 Clean 00:25:15.747 12:45:20 -- common/autotest_common.sh@1451 -- # return 0 00:25:15.747 12:45:20 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:25:15.747 12:45:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:15.747 12:45:20 -- common/autotest_common.sh@10 -- # set +x 00:25:15.747 12:45:20 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:25:15.747 12:45:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:15.747 12:45:20 -- common/autotest_common.sh@10 -- # set +x 00:25:16.006 12:45:21 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:16.006 12:45:21 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:16.006 12:45:21 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:16.006 12:45:21 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:25:16.006 12:45:21 -- spdk/autotest.sh@394 -- # hostname 
00:25:16.006 12:45:21 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:16.263 geninfo: WARNING: invalid characters removed from testname! 00:25:38.194 12:45:43 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:41.485 12:45:46 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:44.030 12:45:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:46.566 12:45:51 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:49.098 12:45:53 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:51.631 12:45:56 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:54.164 12:45:59 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:54.164 12:45:59 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:25:54.164 12:45:59 -- common/autotest_common.sh@1681 -- $ lcov --version 00:25:54.164 12:45:59 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:25:54.164 12:45:59 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:25:54.164 12:45:59 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:25:54.164 12:45:59 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:25:54.164 12:45:59 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:25:54.164 12:45:59 -- 
scripts/common.sh@336 -- $ IFS=.-: 00:25:54.164 12:45:59 -- scripts/common.sh@336 -- $ read -ra ver1 00:25:54.164 12:45:59 -- scripts/common.sh@337 -- $ IFS=.-: 00:25:54.164 12:45:59 -- scripts/common.sh@337 -- $ read -ra ver2 00:25:54.164 12:45:59 -- scripts/common.sh@338 -- $ local 'op=<' 00:25:54.164 12:45:59 -- scripts/common.sh@340 -- $ ver1_l=2 00:25:54.164 12:45:59 -- scripts/common.sh@341 -- $ ver2_l=1 00:25:54.164 12:45:59 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:25:54.164 12:45:59 -- scripts/common.sh@344 -- $ case "$op" in 00:25:54.164 12:45:59 -- scripts/common.sh@345 -- $ : 1 00:25:54.164 12:45:59 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:25:54.164 12:45:59 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:54.164 12:45:59 -- scripts/common.sh@365 -- $ decimal 1 00:25:54.164 12:45:59 -- scripts/common.sh@353 -- $ local d=1 00:25:54.164 12:45:59 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:25:54.164 12:45:59 -- scripts/common.sh@355 -- $ echo 1 00:25:54.164 12:45:59 -- scripts/common.sh@365 -- $ ver1[v]=1 00:25:54.164 12:45:59 -- scripts/common.sh@366 -- $ decimal 2 00:25:54.164 12:45:59 -- scripts/common.sh@353 -- $ local d=2 00:25:54.164 12:45:59 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:25:54.164 12:45:59 -- scripts/common.sh@355 -- $ echo 2 00:25:54.164 12:45:59 -- scripts/common.sh@366 -- $ ver2[v]=2 00:25:54.164 12:45:59 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:25:54.164 12:45:59 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:25:54.164 12:45:59 -- scripts/common.sh@368 -- $ return 0 00:25:54.164 12:45:59 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:54.164 12:45:59 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:25:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.164 --rc genhtml_branch_coverage=1 00:25:54.164 --rc genhtml_function_coverage=1 00:25:54.164 --rc genhtml_legend=1 00:25:54.164 --rc geninfo_all_blocks=1 00:25:54.164 --rc geninfo_unexecuted_blocks=1 00:25:54.164 00:25:54.164 ' 00:25:54.164 12:45:59 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:25:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.164 --rc genhtml_branch_coverage=1 00:25:54.164 --rc genhtml_function_coverage=1 00:25:54.164 --rc genhtml_legend=1 00:25:54.164 --rc geninfo_all_blocks=1 00:25:54.164 --rc geninfo_unexecuted_blocks=1 00:25:54.164 00:25:54.164 ' 00:25:54.164 12:45:59 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:25:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.164 --rc genhtml_branch_coverage=1 00:25:54.164 --rc genhtml_function_coverage=1 00:25:54.164 --rc genhtml_legend=1 00:25:54.164 --rc geninfo_all_blocks=1 00:25:54.164 --rc geninfo_unexecuted_blocks=1 00:25:54.164 00:25:54.164 ' 00:25:54.164 12:45:59 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:25:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.164 --rc genhtml_branch_coverage=1 00:25:54.164 --rc genhtml_function_coverage=1 00:25:54.164 --rc genhtml_legend=1 00:25:54.164 --rc geninfo_all_blocks=1 00:25:54.164 --rc geninfo_unexecuted_blocks=1 00:25:54.164 00:25:54.164 ' 00:25:54.164 12:45:59 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:54.164 12:45:59 -- scripts/common.sh@15 -- $ shopt -s extglob 00:25:54.164 12:45:59 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh 
]] 00:25:54.164 12:45:59 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.164 12:45:59 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.164 12:45:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.164 12:45:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.164 12:45:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.164 12:45:59 -- paths/export.sh@5 -- $ export PATH 00:25:54.164 12:45:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.164 12:45:59 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:25:54.164 12:45:59 -- common/autobuild_common.sh@479 -- $ date +%s 00:25:54.164 12:45:59 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1732020359.XXXXXX 00:25:54.164 12:45:59 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1732020359.mg84xw 00:25:54.164 12:45:59 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:25:54.164 12:45:59 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:25:54.164 12:45:59 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:25:54.164 12:45:59 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:25:54.164 12:45:59 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:25:54.164 12:45:59 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:25:54.164 12:45:59 -- common/autobuild_common.sh@495 -- $ get_config_params 00:25:54.164 12:45:59 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:25:54.164 12:45:59 -- common/autotest_common.sh@10 -- $ set +x 00:25:54.164 12:45:59 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage 
--with-ublk --with-vfio-user --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:25:54.164 12:45:59 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:25:54.165 12:45:59 -- pm/common@17 -- $ local monitor 00:25:54.165 12:45:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:54.165 12:45:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:54.165 12:45:59 -- pm/common@25 -- $ sleep 1 00:25:54.165 12:45:59 -- pm/common@21 -- $ date +%s 00:25:54.165 12:45:59 -- pm/common@21 -- $ date +%s 00:25:54.165 12:45:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732020359 00:25:54.165 12:45:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1732020359 00:25:54.165 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732020359_collect-cpu-load.pm.log 00:25:54.165 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1732020359_collect-vmstat.pm.log 00:25:55.101 12:46:00 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:25:55.101 12:46:00 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:25:55.101 12:46:00 -- spdk/autopackage.sh@14 -- $ timing_finish 00:25:55.101 12:46:00 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:55.101 12:46:00 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:25:55.101 12:46:00 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:55.360 12:46:00 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:55.360 12:46:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:55.360 12:46:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:55.360 12:46:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:55.360 12:46:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:25:55.360 12:46:00 -- pm/common@44 -- $ pid=102702 00:25:55.360 12:46:00 -- pm/common@50 -- $ kill -TERM 102702 00:25:55.360 12:46:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:55.360 12:46:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:25:55.360 12:46:00 -- pm/common@44 -- $ pid=102703 00:25:55.360 12:46:00 -- pm/common@50 -- $ kill -TERM 102703 00:25:55.360 + [[ -n 6003 ]] 00:25:55.360 + sudo kill 6003 00:25:55.370 [Pipeline] } 00:25:55.389 [Pipeline] // timeout 00:25:55.395 [Pipeline] } 00:25:55.411 [Pipeline] // stage 00:25:55.417 [Pipeline] } 00:25:55.431 [Pipeline] // catchError 00:25:55.442 [Pipeline] stage 00:25:55.445 [Pipeline] { (Stop VM) 00:25:55.459 [Pipeline] sh 00:25:55.744 + vagrant halt 00:25:59.052 ==> default: Halting domain... 00:26:05.683 [Pipeline] sh 00:26:05.963 + vagrant destroy -f 00:26:08.495 ==> default: Removing domain... 
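For reference, the shutdown traced above reduces to a short sequence: the autopackage exit hook reads the resource-monitor PID files and sends them TERM, then the pipeline halts and destroys the Vagrant test VM. Below is a minimal sketch reconstructed from the xtrace entries, assuming each power/*.pid file holds a single PID; the real pm/common helper may do additional bookkeeping, and the separate "sudo kill 6003" entry belongs to the surrounding job script and is not part of this sketch.

# Sketch of the teardown shown above: stop the CPU-load/vmstat monitors, then remove the VM.
power_dir=/home/vagrant/spdk_repo/spdk/../output/power
for pidfile in "$power_dir"/collect-cpu-load.pid "$power_dir"/collect-vmstat.pid; do
    # matches the 'kill -TERM 102702' / 'kill -TERM 102703' entries in the log
    [[ -e "$pidfile" ]] && kill -TERM "$(cat "$pidfile")"
done
vagrant halt         # "==> default: Halting domain..."
vagrant destroy -f   # "==> default: Removing domain..."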
00:26:08.767 [Pipeline] sh 00:26:09.049 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:26:09.058 [Pipeline] } 00:26:09.077 [Pipeline] // stage 00:26:09.083 [Pipeline] } 00:26:09.098 [Pipeline] // dir 00:26:09.105 [Pipeline] } 00:26:09.120 [Pipeline] // wrap 00:26:09.126 [Pipeline] } 00:26:09.139 [Pipeline] // catchError 00:26:09.149 [Pipeline] stage 00:26:09.151 [Pipeline] { (Epilogue) 00:26:09.165 [Pipeline] sh 00:26:09.447 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:14.843 [Pipeline] catchError 00:26:14.845 [Pipeline] { 00:26:14.858 [Pipeline] sh 00:26:15.139 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:15.398 Artifacts sizes are good 00:26:15.406 [Pipeline] } 00:26:15.419 [Pipeline] // catchError 00:26:15.429 [Pipeline] archiveArtifacts 00:26:15.435 Archiving artifacts 00:26:15.562 [Pipeline] cleanWs 00:26:15.575 [WS-CLEANUP] Deleting project workspace... 00:26:15.575 [WS-CLEANUP] Deferred wipeout is used... 00:26:15.581 [WS-CLEANUP] done 00:26:15.583 [Pipeline] } 00:26:15.600 [Pipeline] // stage 00:26:15.607 [Pipeline] } 00:26:15.623 [Pipeline] // node 00:26:15.629 [Pipeline] End of Pipeline 00:26:15.683 Finished: SUCCESS
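As a closing note on the coverage pass traced earlier in this run's epilogue (spdk/autotest.sh steps 394 through 404): the lcov calls reduce to one capture, a merge with the pre-test baseline, and a series of filters that strip DPDK, system, and example/app sources. The condensed sketch below mirrors the commands shown in the log; the long list of --rc switches is shortened to the two lcov branch/function options for readability, and the final cleanup is assumed to run from the output directory as the real script does.

LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
out=/home/vagrant/spdk_repo/spdk/../output
# capture coverage for the repo, tagged with the runner's hostname (autotest.sh@394)
$LCOV -c --no-external -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$out/cov_test.info"
# merge with the baseline capture, then remove unwanted source trees (autotest.sh@395-403)
$LCOV -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
$LCOV -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
$LCOV -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
$LCOV -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
$LCOV -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
$LCOV -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR   # autotest.sh@404, run in the output dir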